Multiprocessing package - torch.multiprocessing¶
torch.multiprocessing is a wrapper around the native multiprocessing
module. It registers custom reducers that use shared memory to provide shared
views on the same data in different processes. Once a tensor/storage is moved
to shared memory (see share_memory_()), it can be sent to other processes
without making any copies.
The API is 100% compatible with the original module - it's enough to change
import multiprocessing to import torch.multiprocessing to have all the
tensors sent through the queues, or shared via other mechanisms, moved to
shared memory.
Because the APIs are so similar, we do not document most of this package's contents, and we recommend referring to the excellent documentation of the original module.
Warning
If the main process exits abruptly (e.g. because of an incoming signal),
Python's multiprocessing sometimes fails to clean up its children.
This is a known caveat, so if you're seeing resource leaks after
interrupting the interpreter, it probably means that this has just happened
to you.
Strategy management¶
-
torch.multiprocessing.get_all_sharing_strategies()[source]¶
Returns a set of sharing strategies supported on the current system.
-
torch.multiprocessing.get_sharing_strategy()[source]¶
Returns the current strategy for sharing CPU tensors.
-
torch.multiprocessing.set_sharing_strategy(new_strategy)[source]¶
Sets the strategy for sharing CPU tensors.
- Parameters
new_strategy (str) – Name of the selected strategy. Should be one of the values returned by get_all_sharing_strategies().
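The three functions above compose naturally: query the supported strategies, then select one of them. A short sketch (the exact set of strategies is platform-dependent; "file_system" is shown only as a value that is commonly available on Linux):

```python
import torch.multiprocessing as mp

# Inspect which sharing strategies this platform supports.
strategies = mp.get_all_sharing_strategies()
print(strategies)

# The active strategy is always one of the supported ones.
current = mp.get_sharing_strategy()
print(current)

# Only pass a value that get_all_sharing_strategies() reported.
if "file_system" in strategies:
    mp.set_sharing_strategy("file_system")
```

Passing a name not in the returned set raises an error, so guarding the call as above keeps the sketch portable across platforms.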