Transformers
Attention blocks
BasisSelfAttnBlock
BasisSelfAttnBlock (ch, num_heads, dropout=0)
A self-attention block, i.e. a transformer encoder block.
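As a rough sketch of what such a block computes, single-head scaled dot-product self-attention over a token sequence looks like the following (the names `self_attention`, `Wq`, `Wk`, `Wv` are illustrative, not this library's API, and normalization, the feed-forward sublayer, and multi-head splitting are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # x: (seq_len, ch); queries, keys and values all come from x itself
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(x.shape[-1])   # (seq_len, seq_len)
    return softmax(scores) @ v                # (seq_len, ch)
```

Each output token is a convex combination of the value vectors, weighted by the query-key similarities.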
BasisCrossAttnBlock
BasisCrossAttnBlock (ch, cond_emb_size, num_heads, dropout=0.0)
A cross-attention block, i.e. a transformer decoder block.
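The difference from the self-attention case is where the keys and values come from: in cross-attention the queries are computed from the block's input while keys and values come from the conditioning embedding (of width `cond_emb_size`). A single-head sketch, with illustrative names rather than this library's API:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(x, cond, Wq, Wk, Wv):
    # x: (seq_len, ch); cond: (cond_len, cond_emb_size)
    q = x @ Wq      # queries from the input      -> (seq_len, ch)
    k = cond @ Wk   # keys from the conditioning  -> (cond_len, ch)
    v = cond @ Wv   # values from the conditioning-> (cond_len, ch)
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (seq_len, cond_len)
    return softmax(scores) @ v                # (seq_len, ch)
```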
Spatial residual transformers
SpatialTransformerSelfAttn
SpatialTransformerSelfAttn (ch, num_heads, depth, dropout=0.0)
A spatial residual transformer that uses only self-attention.
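The "spatial" part of the name suggests the usual trick of treating each spatial position of a feature map as a token: flatten the spatial dimensions, attend, reshape back, and add a residual connection. A minimal single-head sketch under that assumption (names are illustrative, not the library's API):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attn(x, Wq, Wk, Wv):
    # x: (ch, h, w) feature map; spatial positions become tokens
    ch, h, w = x.shape
    tokens = x.reshape(ch, h * w).T                # (h*w, ch)
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    out = softmax(q @ k.T / np.sqrt(ch)) @ v       # (h*w, ch)
    return x + out.T.reshape(ch, h, w)             # residual connection
```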
SpatialTransformer
SpatialTransformer (ch, cond_emb_size, num_heads, depth, dropout=0.0)
A spatial residual transformer that uses both self-attention and cross-attention on the conditional input.
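One plausible reading of how `depth` and the conditioning interact, sketched as a hedged single-head simplification (the structure, the `attend`/`spatial_transformer` names, and the output projection `Wout` are assumptions, not the library's actual implementation): at each of `depth` layers the tokens first self-attend, then cross-attend to the conditioning embedding, with a residual around the whole stack.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q_in, kv_in, Wq, Wk, Wv):
    # Generic attention: queries from q_in, keys/values from kv_in
    q, k, v = q_in @ Wq, kv_in @ Wk, kv_in @ Wv
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def spatial_transformer(x, cond, layers, Wout):
    # x: (ch, h, w); cond: (cond_len, cond_emb_size)
    # layers: one (self_weights, cross_weights) pair per unit of depth
    ch, h, w = x.shape
    t = x.reshape(ch, h * w).T                 # (h*w, ch) tokens
    for self_W, cross_W in layers:
        t = t + attend(t, t, *self_W)          # self-attention + residual
        t = t + attend(t, cond, *cross_W)      # cross-attention + residual
    return x + (t @ Wout).T.reshape(ch, h, w)  # outer residual connection
```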