- Inference mode complains about inplace at `torch.mean` call, but I don't use inplace (pytorch/pytorch issue #70177, GitHub)
- Performance of `torch.compile` is significantly slowed down under `torch.inference_mode` (PyTorch Forums, torch.compile)
- TorchDynamo Update: 1.48x geomean speedup on TorchBench CPU Inference (PyTorch Dev Discussions, compiler)
- Inference mode throws RuntimeError for `torch.repeat_interleave()` for big tensors (pytorch/pytorch issue #75595, GitHub)
- TorchServe: Increasing inference speed while improving efficiency (PyTorch Dev Discussions, deployment)