Tpetra parallel linear algebra
Base class for distributed Tpetra objects that support data redistribution.

#include <Tpetra_DistObject_decl.hpp>
Public Member Functions

Constructors and destructor

DistObject (const Teuchos::RCP< const map_type > &map)
    Constructor.
DistObject (const DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic > &rhs)
    Copy constructor.
virtual ~DistObject ()
    Destructor (virtual for memory safety of derived classes).

Public methods for redistributing data

void doImport (const SrcDistObject &source, const Import< LocalOrdinal, GlobalOrdinal, Node > &importer, CombineMode CM)
    Import data into this object using an Import object ("forward mode").
void doExport (const SrcDistObject &source, const Export< LocalOrdinal, GlobalOrdinal, Node > &exporter, CombineMode CM)
    Export data into this object using an Export object ("forward mode").
void doImport (const SrcDistObject &source, const Export< LocalOrdinal, GlobalOrdinal, Node > &exporter, CombineMode CM)
    Import data into this object using an Export object ("reverse mode").
void doExport (const SrcDistObject &source, const Import< LocalOrdinal, GlobalOrdinal, Node > &importer, CombineMode CM)
    Export data into this object using an Import object ("reverse mode").

Attribute accessor methods

bool isDistributed () const
    Whether this is a globally distributed object.
virtual Teuchos::RCP< const map_type > getMap () const
    The Map describing the parallel distribution of this object.

I/O methods

void print (std::ostream &os) const
    Print this object to the given output stream.

Implementation of Teuchos::Describable

virtual std::string description () const
    One-line description of this object.
virtual void describe (Teuchos::FancyOStream &out, const Teuchos::EVerbosityLevel verbLevel=Teuchos::Describable::verbLevel_default) const
    Print a description of this object to the given output stream.
Protected Types

Protected Member Functions

virtual size_t constantNumberOfPackets () const
    Whether this instance promises always to have a constant number of packets per LID, and if so, how many packets per LID there are.
virtual void doTransfer (const SrcDistObject &src, CombineMode CM, size_t numSameIDs, const Teuchos::ArrayView< const local_ordinal_type > &permuteToLIDs, const Teuchos::ArrayView< const local_ordinal_type > &permuteFromLIDs, const Teuchos::ArrayView< const local_ordinal_type > &remoteLIDs, const Teuchos::ArrayView< const local_ordinal_type > &exportLIDs, Distributor &distor, ReverseOption revOp)
    Redistribute data across memory images.
virtual void createViews () const
    Hook for creating a const view.
virtual void createViewsNonConst (KokkosClassic::ReadWriteOption rwo)
    Hook for creating a nonconst view.
virtual void releaseViews () const
    Hook for releasing views.
Methods implemented by subclasses and used by doTransfer()

The doTransfer() method uses the subclass' implementations of these methods to implement data transfer. Subclasses of DistObject must implement these methods. This is an instance of the Template Method Pattern. ("Template" here doesn't mean "C++ template"; it means "pattern with holes that are filled in by the subclass' method implementations.")

virtual bool checkSizes (const SrcDistObject &source)=0
    Compare the source and target (this) objects for compatibility.
virtual bool useNewInterface ()
    Whether the subclass implements the old or the new interface.
virtual void copyAndPermute (const SrcDistObject &source, size_t numSameIDs, const Teuchos::ArrayView< const local_ordinal_type > &permuteToLIDs, const Teuchos::ArrayView< const local_ordinal_type > &permuteFromLIDs)
    Perform copies and permutations that are local to this process.
virtual void copyAndPermuteNew (const SrcDistObject &source, size_t numSameIDs, const Kokkos::View< const local_ordinal_type *, execution_space > &permuteToLIDs, const Kokkos::View< const local_ordinal_type *, execution_space > &permuteFromLIDs)
virtual void packAndPrepare (const SrcDistObject &source, const Teuchos::ArrayView< const local_ordinal_type > &exportLIDs, Teuchos::Array< packet_type > &exports, const Teuchos::ArrayView< size_t > &numPacketsPerLID, size_t &constantNumPackets, Distributor &distor)
    Perform any packing or preparation required for communication.
virtual void packAndPrepareNew (const SrcDistObject &source, const Kokkos::View< const local_ordinal_type *, execution_space > &exportLIDs, Kokkos::View< packet_type *, execution_space > &exports, const Kokkos::View< size_t *, execution_space > &numPacketsPerLID, size_t &constantNumPackets, Distributor &distor)
virtual void unpackAndCombine (const Teuchos::ArrayView< const local_ordinal_type > &importLIDs, const Teuchos::ArrayView< const packet_type > &imports, const Teuchos::ArrayView< size_t > &numPacketsPerLID, size_t constantNumPackets, Distributor &distor, CombineMode CM)
    Perform any unpacking and combining after communication.
virtual void unpackAndCombineNew (const Kokkos::View< const local_ordinal_type *, execution_space > &importLIDs, const Kokkos::View< const packet_type *, execution_space > &imports, const Kokkos::View< size_t *, execution_space > &numPacketsPerLID, size_t constantNumPackets, Distributor &distor, CombineMode CM)
Protected Attributes

Teuchos::RCP< const map_type > map_
    The Map over which this object is distributed.

Typedefs

typedef Kokkos::Details::ArithTraits< Packet >::val_type packet_type
    The type of each datum being sent or received in an Import or Export.
typedef LocalOrdinal local_ordinal_type
    The type of local indices.
typedef GlobalOrdinal global_ordinal_type
    The type of global indices.
typedef Node node_type
    The Kokkos Node type.
typedef Node::device_type device_type
    The Kokkos Device type.
typedef device_type::execution_space execution_space
    The Kokkos execution space.
typedef Map< local_ordinal_type, global_ordinal_type, node_type > map_type
    The type of the Map specialization to use with this class.

Methods for use only by experts

virtual void removeEmptyProcessesInPlace (const Teuchos::RCP< const map_type > &newMap)
    Remove processes which contain no elements in this object's Map.
template<class PT, class LO, class GO, class NT>
void removeEmptyProcessesInPlace (Teuchos::RCP< Tpetra::DistObject< PT, LO, GO, NT > > &input, const Teuchos::RCP< const Map< LO, GO, NT > > &newMap)
template<class PT, class LO, class GO, class NT>
void removeEmptyProcessesInPlace (Teuchos::RCP< Tpetra::DistObject< PT, LO, GO, NT > > &input)
Base class for distributed Tpetra objects that support data redistribution.
DistObject is a base class for all Tpetra distributed global objects, including CrsMatrix and MultiVector. It provides the basic mechanisms and interface specifications for importing and exporting operations using Import and Export objects.
Template parameters:
    LocalOrdinal   The type of local IDs. Same as Map's LocalOrdinal template parameter. This should be an integer type, preferably signed.
    GlobalOrdinal  The type of global IDs. Same as Map's GlobalOrdinal template parameter. Defaults to the same type as LocalOrdinal. This should also be an integer type, preferably signed.
    Node           Same as Map's Node template parameter. Defaults to the default Kokkos Node type.
    classic        DO NOT SET THIS EXPLICITLY. This template parameter exists only for backwards compatibility. It must always be false.
Most Tpetra users will only use this class' methods to perform data redistribution for subclasses such as CrsMatrix, MultiVector, and Vector. DistObject provides four methods for redistributing data: two versions of doImport(), and two versions of doExport(). Import operations redistribute data from a nonoverlapping (one-to-one) distribution to a possibly overlapping distribution. Export operations redistribute data from a possibly overlapping distribution to a nonoverlapping (one-to-one) distribution. Once you have precomputed a data redistribution plan (an Import or Export object), you may use the plan to redistribute an input object's data into this object by calling one of these methods. The input object of doImport() or doExport() is always the "source" of the redistribution operation, which sends the data. The *this object is the target, which receives and combines the data. It has the distribution given by this->getMap().
Both Import and Export operations occur in two modes: forward and reverse. Forward mode is the usual case, where you are calling a method with its matching plan type (doImport() for an Import plan, or doExport() for an Export plan). In that case, the input DistObject must have the same Map as the source Map of the plan, and the target DistObject must have the same Map as the target Map of the plan. Reverse mode is also possible, where you call a method with the opposite plan type (doImport() for an Export plan, or doExport() for an Import plan). In that case, the source DistObject's Map must be the same as the target Map of the plan, and the target DistObject's Map must be the same as the source Map of the plan. If you call doImport(), we still call this an Import operation, even if you are using an Export plan in reverse. Similarly, if you call doExport(), we call this an Export operation.
Most users will want to use forward mode. However, reverse mode is useful for some applications. For example, suppose you are solving a nonlinear partial differential equation using the finite element method, with Newton's method for the nonlinear equation. When assembling into a vector, it is convenient and efficient to do local assembly first into a vector with an overlapping distribution, then do global assembly via forward mode Export into a vector with a nonoverlapping distribution. After the linear solve, you may want to bring the resulting nonoverlapping distribution vector back to the overlapping distribution for another update phase. This would be a reverse mode Import, using the precomputed Export object.
Another use case for reverse mode is in CrsMatrix, for the transpose version of distributed sparse matrix-vector multiply ("mat-vec"). Non-transpose mat-vec (a function from the domain Map to the range Map) does an Import to bring in the source vector's data from the domain Map to the column Map of the sparse matrix, and an Export (if necessary) to bring the results from the row Map of the sparse matrix to the range Map. Transpose mat-vec (a function from the range Map to the domain Map) uses these precomputed Import and Export objects in reverse mode: first the Export in reverse mode to Import the source vector's data to the row Map, and then the Import in reverse mode to Export the results to the domain Map. Reverse mode lets us reuse the precomputed data redistribution plans for the transpose case.
If you want to implement your own DistObject subclass, you should start by implementing the four pure virtual methods: checkSizes(), copyAndPermute(), packAndPrepare(), and unpackAndCombine(). The implementation of doTransfer() includes documentation that explains how DistObject uses those methods to do data redistribution.
If you are writing a DistObject class that uses Kokkos compute buffers and aims to work for any Kokkos Node type, you should also implement the three hooks that create and release views: createViews(), createViewsNonConst(), and releaseViews(). The default implementation of these hooks does nothing. The documentation of these methods explains different ways you might choose to implement them.
DistObject implements SrcDistObject, because we presume that if an object can be the target of an Import or Export, it can also be the source of an Import or Export.
Definition at line 190 of file Tpetra_DistObject_decl.hpp.
typedef Kokkos::Details::ArithTraits<Packet>::val_type Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::packet_type
The type of each datum being sent or received in an Import or Export.
Note that this type does not always correspond to the Scalar template parameter of subclasses.
Definition at line 202 of file Tpetra_DistObject_decl.hpp.
typedef LocalOrdinal Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::local_ordinal_type
The type of local indices.
Definition at line 204 of file Tpetra_DistObject_decl.hpp.
typedef GlobalOrdinal Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::global_ordinal_type
The type of global indices.
Definition at line 206 of file Tpetra_DistObject_decl.hpp.
typedef Node Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::node_type
The Kokkos Node type.
Definition at line 208 of file Tpetra_DistObject_decl.hpp.
typedef Node::device_type Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::device_type
The Kokkos Device type.
Definition at line 211 of file Tpetra_DistObject_decl.hpp.
typedef device_type::execution_space Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::execution_space
The Kokkos execution space.
Definition at line 213 of file Tpetra_DistObject_decl.hpp.
typedef Map<local_ordinal_type, global_ordinal_type, node_type> Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::map_type
The type of the Map specialization to use with this class.
Definition at line 221 of file Tpetra_DistObject_decl.hpp.
[protected]
Whether the data transfer should be performed in forward or reverse mode.
"Reverse mode" means calling doExport() with an Import object, or calling doImport() with an Export object. "Forward mode" means calling doExport() with an Export object, or calling doImport() with an Import object.
Definition at line 449 of file Tpetra_DistObject_decl.hpp.
[explicit]
Constructor.
Definition at line 59 of file Tpetra_DistObject_def.hpp.
Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::DistObject (const DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic > &rhs)
Copy constructor.
Definition at line 111 of file Tpetra_DistObject_def.hpp.
[virtual]
Destructor (virtual for memory safety of derived classes).
Definition at line 117 of file Tpetra_DistObject_def.hpp.
void Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::doImport (const SrcDistObject &source, const Import< LocalOrdinal, GlobalOrdinal, Node > &importer, CombineMode CM)
Import data into this object using an Import object ("forward mode").
The input DistObject is always the source of the data redistribution operation, and the *this object is always the target.
If you don't know the difference between forward and reverse mode, then you probably want forward mode. Use this method with your precomputed Import object if you want to do an Import, else use doExport() with a precomputed Export object.
Parameters:
    source    [in] The "source" object for redistribution.
    importer  [in] Precomputed data redistribution plan. Its source Map must be the same as the input DistObject's Map, and its target Map must be the same as this->getMap().
    CM        [in] How to combine incoming data with the same global index.
Definition at line 246 of file Tpetra_DistObject_def.hpp.
void Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::doExport (const SrcDistObject &source, const Export< LocalOrdinal, GlobalOrdinal, Node > &exporter, CombineMode CM)
Export data into this object using an Export object ("forward mode").
The input DistObject is always the source of the data redistribution operation, and the *this object is always the target.
If you don't know the difference between forward and reverse mode, then you probably want forward mode. Use this method with your precomputed Export object if you want to do an Export, else use doImport() with a precomputed Import object.
Parameters:
    source    [in] The "source" object for redistribution.
    exporter  [in] Precomputed data redistribution plan. Its source Map must be the same as the input DistObject's Map, and its target Map must be the same as this->getMap().
    CM        [in] How to combine incoming data with the same global index.
Definition at line 277 of file Tpetra_DistObject_def.hpp.
void Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::doImport (const SrcDistObject &source, const Export< LocalOrdinal, GlobalOrdinal, Node > &exporter, CombineMode CM)
Import data into this object using an Export object ("reverse mode").
The input DistObject is always the source of the data redistribution operation, and the *this object is always the target.
If you don't know the difference between forward and reverse mode, then you probably want forward mode. Use the version of doImport() that takes a precomputed Import object in that case.
Parameters:
    source    [in] The "source" object for redistribution.
    exporter  [in] Precomputed data redistribution plan. Its target Map must be the same as the input DistObject's Map, and its source Map must be the same as this->getMap(). (Note the difference from forward mode.)
    CM        [in] How to combine incoming data with the same global index.
Definition at line 308 of file Tpetra_DistObject_def.hpp.
void Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::doExport (const SrcDistObject &source, const Import< LocalOrdinal, GlobalOrdinal, Node > &importer, CombineMode CM)
Export data into this object using an Import object ("reverse mode").
The input DistObject is always the source of the data redistribution operation, and the *this object is always the target.
If you don't know the difference between forward and reverse mode, then you probably want forward mode. Use the version of doExport() that takes a precomputed Export object in that case.
Parameters:
    source    [in] The "source" object for redistribution.
    importer  [in] Precomputed data redistribution plan. Its target Map must be the same as the input DistObject's Map, and its source Map must be the same as this->getMap(). (Note the difference from forward mode.)
    CM        [in] How to combine incoming data with the same global index.
Definition at line 340 of file Tpetra_DistObject_def.hpp.
bool Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::isDistributed () const
Whether this is a globally distributed object.
For a definition of "globally distributed" (and its opposite, "locally replicated"), see the documentation of Map's isDistributed() method.
Definition at line 372 of file Tpetra_DistObject_def.hpp.
[inline, virtual]
The Map describing the parallel distribution of this object.
Note that some Tpetra objects might be distributed using multiple Map objects. For example, CrsMatrix has both a row Map and a column Map. It is up to the subclass to decide which Map to use when invoking the DistObject constructor.
Definition at line 347 of file Tpetra_DistObject_decl.hpp.
void Tpetra::DistObject< Packet, LocalOrdinal, GlobalOrdinal, Node, classic >::print (std::ostream &os) const
Print this object to the given output stream.
We generally assume that all MPI processes can print to the given stream.
Definition at line 918 of file Tpetra_DistObject_def.hpp.
[virtual]
One-line description of this object.
We declare this method virtual so that subclasses of DistObject may override it.
Reimplemented in Tpetra::CrsMatrix< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::MultiVector< Scalar, LO, GO, Node >, Tpetra::CrsGraph< LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::CrsGraph< LO, GO, node_type >, Tpetra::Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, and Tpetra::Experimental::BlockCrsMatrix< Scalar, LO, GO, Node >.
Definition at line 123 of file Tpetra_DistObject_def.hpp.
[virtual]
Print a description of this object to the given output stream.
We declare this method virtual so that subclasses of DistObject may override it.
Reimplemented in Tpetra::CrsMatrix< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::MultiVector< Scalar, LO, GO, Node >, Tpetra::CrsGraph< LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::CrsGraph< LO, GO, node_type >, Tpetra::Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, and Tpetra::Experimental::BlockCrsMatrix< Scalar, LO, GO, Node >.
Definition at line 143 of file Tpetra_DistObject_def.hpp.
[virtual]
Remove processes which contain no elements in this object's Map.
On input, this object is distributed over the Map returned by getMap() (the "original Map," with its communicator, the "original communicator"). The input newMap of this method must be the same as the result of calling getMap()->removeEmptyProcesses(). On processes in the original communicator which contain zero elements ("excluded processes," as opposed to "included processes"), the input newMap must be Teuchos::null (which is what getMap()->removeEmptyProcesses() returns anyway).

On included processes, reassign this object's Map (that would be returned by getMap()) to the input newMap, and do any work that needs to be done to restore correct semantics. On excluded processes, free any data that needs freeing, and do any other work that needs to be done to restore correct semantics.
This method has collective semantics over the original communicator. On exit, the only method of this object which is safe to call on excluded processes is the destructor. This implies that subclasses' destructors must not contain communication operations.
Reimplemented in Tpetra::MultiVector< Scalar, LO, GO, Node >.
Definition at line 208 of file Tpetra_DistObject_def.hpp.
[protected, virtual]
Whether this instance promises always to have a constant number of packets per LID, and if so, how many packets per LID there are.
If this method returns zero, the instance says that it might possibly have a different number of packets for each LID to send or receive. If it returns nonzero, the instance promises that the number of packets is the same for all LIDs, and that the return value is this number of packets per LID.
The default implementation of this method returns zero. This does not affect the behavior of doTransfer() in any way. If a nondefault implementation returns nonzero, doTransfer() will use this information to avoid unnecessary allocation and / or resizing of arrays.
Reimplemented in Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, and Tpetra::MultiVector< Scalar, LO, GO, Node >.
Definition at line 379 of file Tpetra_DistObject_def.hpp.
[protected, virtual]
Redistribute data across memory images.
Parameters:
    src              [in] The source object, to redistribute into the target object, which is the *this object.
    CM               [in] The combine mode that describes how to combine values that map to the same global ID on the same process.
    permuteToLIDs    [in] See copyAndPermute().
    permuteFromLIDs  [in] See copyAndPermute().
    remoteLIDs       [in] List of entries (as local IDs) in the destination object to receive from other processes.
    exportLIDs       [in] See packAndPrepare().
    distor           [in/out] The Distributor object that knows how to redistribute data.
    revOp            [in] Whether to do a forward- or reverse-mode redistribution.
Definition at line 386 of file Tpetra_DistObject_def.hpp.
[protected, pure virtual]
Compare the source and target (this) objects for compatibility.
Implemented in Tpetra::CrsMatrix< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::MultiVector< Scalar, LO, GO, Node >, Tpetra::CrsGraph< LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::CrsGraph< LO, GO, node_type >, Tpetra::Experimental::BlockCrsMatrix< Scalar, LO, GO, Node >, and Tpetra::Experimental::BlockMultiVector< Scalar, LO, GO, Node >.
[inline, protected, virtual]
Whether the subclass implements the old or the new interface.
Reimplemented in Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, and Tpetra::MultiVector< Scalar, LO, GO, Node >.
Definition at line 545 of file Tpetra_DistObject_decl.hpp.
[inline, protected, virtual]
Perform copies and permutations that are local to this process.
Parameters:
    source           [in] On entry, the source object, from which we are distributing. We distribute to the destination object, which is the *this object.
    numSameIDs       [in] The number of elements that are the same on the source and destination (this) objects. These elements are owned by the same process in both the source and destination objects. No permutation occurs.
    numPermuteIDs    [in] The number of elements that are locally permuted between the source and destination objects.
    permuteToLIDs    [in] List of the elements that are permuted. They are listed by their LID in the destination object.
    permuteFromLIDs  [in] List of the elements that are permuted. They are listed by their LID in the source object.
Reimplemented in Tpetra::CrsMatrix< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::CrsGraph< LocalOrdinal, GlobalOrdinal, Node, classic >, Tpetra::CrsGraph< LO, GO, node_type >, Tpetra::Experimental::BlockCrsMatrix< Scalar, LO, GO, Node >, and Tpetra::Experimental::BlockMultiVector< Scalar, LO, GO, Node >.
Definition at line 565 of file Tpetra_DistObject_decl.hpp.
[inline, protected, virtual]
Perform any packing or preparation required for communication.
Parameters:
    source              [in] Source object for the redistribution.
    exportLIDs          [in] List of the entries (as local IDs in the source object) we will be sending to other images.
    exports             [out] On exit, the buffer for data to send.
    numPacketsPerLID    [out] On exit, the implementation of this method must do one of two things: set numPacketsPerLID[i] to contain the number of packets to be exported for exportLIDs[i] and set constantNumPackets to zero, or set constantNumPackets to a nonzero value. If the latter, the implementation need not fill numPacketsPerLID.
    constantNumPackets  [out] On exit, 0 if numPacketsPerLID has variable contents (different size for each LID). If nonzero, then it is expected that the number of packets per LID is constant, and that constantNumPackets is that value.
    distor              [in] The Distributor object we are using.
Reimplemented in Tpetra::Experimental::BlockMultiVector< Scalar, LO, GO, Node >.
Definition at line 600 of file Tpetra_DistObject_decl.hpp.
[inline, protected, virtual]
Perform any unpacking and combining after communication.
Parameters:
    importLIDs          [in] List of the entries (as LIDs in the destination object) we received from other images.
    imports             [in] Buffer containing data we received.
    numPacketsPerLID    [in] If constantNumPackets is zero, then numPacketsPerLID[i] contains the number of packets imported for importLIDs[i].
    constantNumPackets  [in] If nonzero, then numPacketsPerLID is constant (same value in all entries) and constantNumPackets is that value. If zero, then numPacketsPerLID[i] is the number of packets imported for importLIDs[i].
    distor              [in] The Distributor object we are using.
    CM                  [in] The combine mode to use when combining the imported entries with existing entries.
Reimplemented in Tpetra::Experimental::BlockMultiVector< Scalar, LO, GO, Node >.
Definition at line 638 of file Tpetra_DistObject_decl.hpp.
[protected, virtual]
Hook for creating a const view.
doTransfer() calls this on the source object. By default, it does nothing, but the source object can use this as a hint to fetch data from a compute buffer on an off-CPU device (such as a GPU) into host memory.
Reimplemented in Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, and Tpetra::MultiVector< Scalar, LO, GO, Node >.
Definition at line 933 of file Tpetra_DistObject_def.hpp.
[protected, virtual]
Hook for creating a nonconst view.
doTransfer() calls this on the destination (*this) object. By default, it does nothing, but the destination object can use this as a hint to fetch data from a compute buffer on an off-CPU device (such as a GPU) into host memory.

Parameters:
    rwo  [in] Whether to create a write-only or a read-and-write view. For Kokkos Node types where compute buffers live in a separate memory space (e.g., in the device memory of a discrete accelerator like a GPU), a write-only view only requires copying from host memory to the compute buffer, whereas a read-and-write view requires copying both ways (once to read, from the compute buffer to host memory, and once to write, back to the compute buffer).
Reimplemented in Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, and Tpetra::MultiVector< Scalar, LO, GO, Node >.
Definition at line 939 of file Tpetra_DistObject_def.hpp.
[protected, virtual]
Hook for releasing views.
doTransfer() calls this on both the source and destination objects, once it no longer needs to access that object's data. By default, this method does nothing. Implementations may use this as a hint to free host memory which is a view of a compute buffer, once the host memory view is no longer needed. Some implementations may prefer to mirror compute buffers in host memory; for these implementations, releaseViews() may do nothing.
Reimplemented in Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node, classic >, and Tpetra::MultiVector< Scalar, LO, GO, Node >.
Definition at line 945 of file Tpetra_DistObject_def.hpp.
[protected]
The Map over which this object is distributed.
Definition at line 695 of file Tpetra_DistObject_decl.hpp.