[PyTorch] How to hook the '.to()' or '.cuda()' method when the CPU and CUDA implementations of a module are different

2021. 10. 5.

When implementing a custom operation such as upfirdn2d, the behavior of a method may differ between the default (pure-PyTorch) implementation and the CUDA implementation, so the module has to switch implementations whenever it moves between devices.

```python
import torch
import torch.nn as nn
from torch import Tensor
from typing import Callable, Optional


class Foo(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.default_operation = self._load_default_operation()
        # The CUDA operation may need to be compiled, so set it
        # lazily, only once the module actually reaches a GPU.
        self.cuda_operation: Optional[Callable[[Tensor], Tensor]] = None

    def _load_default_operation(self) -> Callable[[Tensor], Tensor]:
        # Placeholder: stands in for the pure-PyTorch implementation.
        return lambda x: x
```
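The preview cuts off at this point, so what follows is a minimal sketch of where the hook presumably goes, not the post's actual code. It relies on the fact that `.to()`, `.cuda()`, and `.cpu()` all funnel through `nn.Module._apply()`, so overriding `_apply` catches every device move; the class name `Upfirdn2dLike`, the `_device_probe` buffer, and the `_load_cuda_operation` helper are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch import Tensor
from typing import Callable, Optional


class Upfirdn2dLike(nn.Module):
    """Dispatches forward() to a CPU or CUDA implementation (sketch)."""

    def __init__(self) -> None:
        super().__init__()
        self.cuda_operation: Optional[Callable[[Tensor], Tensor]] = None
        self.operation: Callable[[Tensor], Tensor] = self._default_operation
        # Empty buffer: _apply() moves buffers together with the module,
        # so its device afterwards tells us where the module ended up.
        self.register_buffer("_device_probe", torch.empty(0))

    @staticmethod
    def _default_operation(x: Tensor) -> Tensor:
        return x * 2  # stand-in for the pure-PyTorch implementation

    def _load_cuda_operation(self) -> Callable[[Tensor], Tensor]:
        # Stand-in for compiling/loading a CUDA extension; this can be
        # expensive, hence it is done lazily on the first move to a GPU.
        return lambda x: x * 2

    def _apply(self, fn, *args, **kwargs):
        # .to(), .cuda(), and .cpu() all route through nn.Module._apply,
        # so this single override hooks every device move.
        module = super()._apply(fn, *args, **kwargs)
        if self._device_probe.is_cuda:
            if self.cuda_operation is None:
                self.cuda_operation = self._load_cuda_operation()
            self.operation = self.cuda_operation
        else:
            self.operation = self._default_operation
        return module

    def forward(self, x: Tensor) -> Tensor:
        return self.operation(x)


m = Upfirdn2dLike()
print(m(torch.ones(2)))        # uses the default (CPU) implementation
if torch.cuda.is_available():
    m = m.cuda()               # _apply fires and swaps in the CUDA op
    print(m(torch.ones(2, device="cuda")))
```

Tracking the device through a registered buffer, rather than inspecting the `fn` passed to `_apply`, keeps the dispatch correct even for calls like `.float()` that change dtype without changing device.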