Add conditional replacement of @torch.inference_mode for inference on AMD DirectML GPUs
#3295 (+119 −6)
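A minimal sketch of what such a conditional replacement might look like. This is an illustration only, not the PR's actual diff: the detection helper `_dml_available`, the `torch_directml` probe, and the `inference_context` alias are all assumptions. The underlying idea is that `torch.inference_mode` is not supported on DirectML builds, so code falls back to `torch.no_grad` there; the `contextlib.nullcontext` branch just lets the sketch run even where `torch` is not installed.

```python
import contextlib

def _dml_available() -> bool:
    # Assumption: presence of the torch_directml package signals
    # an AMD DirectML build of PyTorch.
    try:
        import torch_directml  # noqa: F401
        return True
    except ImportError:
        return False

try:
    import torch
    # On DirectML, inference-mode tensors cause runtime errors,
    # so swap in torch.no_grad; elsewhere keep torch.inference_mode.
    inference_context = torch.no_grad if _dml_available() else torch.inference_mode
except ImportError:
    # Illustration-only fallback so the sketch runs without torch.
    inference_context = contextlib.nullcontext

def run_inference(model_fn, x):
    # All inference call sites go through the selected context manager,
    # so the DirectML/non-DirectML choice is made in one place.
    with inference_context():
        return model_fn(x)

print(run_inference(lambda v: v * 2, 21))  # → 42
```

Centralizing the choice in a single `inference_context` alias means call sites do not need per-backend branching; only the one conditional at import time changes behavior.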