Neighborhood Attention Extension

Bringing attention to a neighborhood near you!

NATTEN is a PyTorch extension implementing Neighborhood Attention (local attention) and Dilated Neighborhood Attention (sparse global attention, a.k.a. dilated local attention) as PyTorch modules and ops for 1D, 2D, and 3D data.
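As a quick taste, the 2D module drops into a model like any other attention layer. A minimal sketch, assuming the NeighborhoodAttention2D interface and the channels-last input layout described in NATTEN's docs:

import torch
from natten import NeighborhoodAttention2D

# Illustrative values; dim must be divisible by num_heads.
na2d = NeighborhoodAttention2D(dim=128, num_heads=4, kernel_size=7, dilation=2)

# NATTEN modules take channels-last inputs: (batch, height, width, dim).
x = torch.randn(1, 56, 56, 128)
out = na2d(x)  # same shape as the input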

Start using our new Fused Neighborhood Attention implementation today!
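A sketch of opting in, assuming the global use_fused_na toggle from the 0.17-era docs; once set, module calls dispatch to the fused kernels where your build supports them:

import natten

# Assumption: fused neighborhood attention is an opt-in global setting
# in 0.17-era releases; check the docs for your installed version.
natten.use_fused_na(True)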

GitHub / PyPI / Neighborhood Attention Transformers

Install with pip

Latest release: 0.17.5

Pick the command matching your PyTorch version and its CUDA build, or the CPU build if you're not using CUDA:

PyTorch 2.6.0 + CUDA 12.6:

pip3 install natten==0.17.5+torch260cu126 -f https://shi-labs.com/natten/wheels/

PyTorch 2.6.0 + CUDA 12.4:

pip3 install natten==0.17.5+torch260cu124 -f https://shi-labs.com/natten/wheels/

PyTorch 2.6.0, CPU only:

pip3 install natten==0.17.5+torch260cpu -f https://shi-labs.com/natten/wheels/

PyTorch 2.5.0 + CUDA 12.4:

pip3 install natten==0.17.5+torch250cu124 -f https://shi-labs.com/natten/wheels/

PyTorch 2.5.0 + CUDA 12.1:

pip3 install natten==0.17.5+torch250cu121 -f https://shi-labs.com/natten/wheels/

PyTorch 2.5.0, CPU only:

pip3 install natten==0.17.5+torch250cpu -f https://shi-labs.com/natten/wheels/

Your build isn't listed? Mac user? Just do:

pip install natten==0.17.5

Please note that without pre-compiled wheels, installation can take a while, since NATTEN kernels will be compiled on your device. Compiler tools are required in that case.

If you're using an NVIDIA GPU, you also need CUDA Toolkit > 11.7, CMake > 3.20, and PyTorch > 2.0 installed before attempting to install/build NATTEN.

NATTEN does not have pre-compiled wheels for Windows, but you can try building from source.

For more information, please refer to our docs.
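Once installed, you can sanity-check which build you ended up with. A minimal check, assuming the package exposes __version__ as recent releases do:

python3 -c "import natten; print(natten.__version__)"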

CUDA help

Don't know your torch/CUDA version? Run this:

python3 -c "import torch; print(torch.__version__)"

Note: on CUDA builds the printed version string ends in a CUDA tag (e.g. 2.6.0+cu126). That tag refers to the version of the compiler which compiled your torch build, not the version of the CUDA Toolkit you may have installed locally.
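You can also ask torch directly which CUDA version it was compiled against; this prints None on CPU-only builds:

python3 -c "import torch; print(torch.version.cuda)"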

Run this to check if you have the CUDA compiler:

which nvcc

and if you do, run this to check the version:

nvcc --version

If you don't have the CUDA Toolkit and just want to know which torch and NATTEN build is best for you, check your driver version with:

nvidia-smi

Once you know your driver version, match it with the newest CUDA Toolkit release it supports; the "CUDA Version" field in the nvidia-smi header reports exactly that.

Quick links

Learn more about neighborhood attention
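If you just want the gist first: each query attends to its kernel_size nearest tokens, with the window clamped at the edges so every query sees a full neighborhood. A naive single-head 1D reference, for illustration only (NATTEN's actual kernels are fused and tiled):

import torch

def naive_na1d(q, k, v, kernel_size):
    # Each query attends to its kernel_size nearest tokens; near the
    # edges the window shifts to stay in bounds, so every query sees
    # exactly kernel_size keys. Assumes seq_len >= kernel_size.
    B, L, D = q.shape
    scale = D ** -0.5
    out = torch.empty_like(q)
    for i in range(L):
        start = min(max(i - kernel_size // 2, 0), L - kernel_size)
        window = slice(start, start + kernel_size)
        attn = (q[:, i : i + 1] @ k[:, window].transpose(1, 2)) * scale
        out[:, i] = (attn.softmax(dim=-1) @ v[:, window]).squeeze(1)
    return out

# Example: 7-token neighborhoods over a length-64 sequence.
q = k = v = torch.randn(2, 64, 32)
print(naive_na1d(q, k, v, kernel_size=7).shape)  # torch.Size([2, 64, 32])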

Citation

Please consider citing our most recent work on NATTEN:

@inproceedings{hassani2024faster,
  title        = {Faster Neighborhood Attention: Reducing the O(n^2) Cost of Self Attention at the Threadblock Level},
  author       = {Ali Hassani and Wen-Mei Hwu and Humphrey Shi},
  year         = 2024,
  booktitle    = {Advances in Neural Information Processing Systems},
}

and the original Neighborhood Attention Transformer papers:

@inproceedings{hassani2023neighborhood,
  title        = {Neighborhood Attention Transformer},
  author       = {Ali Hassani and Steven Walton and Jiachen Li and Shen Li and Humphrey Shi},
  year         = 2023,
  booktitle    = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}
}
@misc{hassani2022dilated,
  title        = {Dilated Neighborhood Attention Transformer},
  author       = {Ali Hassani and Humphrey Shi},
  year         = 2022,
  url          = {https://arxiv.org/abs/2209.15001},
  eprint       = {2209.15001},
  archiveprefix = {arXiv},
  primaryclass = {cs.CV}
}