Transformers

Attention blocks


source

BasisSelfAttnBlock

 BasisSelfAttnBlock (ch, num_heads, dropout=0)

A self-attention block, i.e. a transformer encoder block.
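As a rough illustration, a minimal pre-norm self-attention block might look like the sketch below. The class name, the pre-norm layout, and the 4x feed-forward expansion are assumptions for illustration, not the library's actual implementation.

```python
import torch.nn as nn

class SelfAttnBlockSketch(nn.Module):
    """Hedged sketch of a transformer-encoder-style self-attention block."""
    def __init__(self, ch, num_heads, dropout=0.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(ch)
        self.attn = nn.MultiheadAttention(ch, num_heads, dropout=dropout, batch_first=True)
        self.norm2 = nn.LayerNorm(ch)
        self.ff = nn.Sequential(                 # feed-forward sub-block (assumed 4x expansion)
            nn.Linear(ch, 4 * ch),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(4 * ch, ch),
        )

    def forward(self, x):
        # x: (batch, tokens, ch)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual self-attention
        x = x + self.ff(self.norm2(x))                      # residual feed-forward
        return x
```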


source

BasisCrossAttnBlock

 BasisCrossAttnBlock (ch, cond_emb_size, num_heads, dropout=0.0)

A cross-attention block, i.e. a transformer decoder block.
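A decoder-style block adds a cross-attention step that attends from the feature tokens to a conditioning sequence of width `cond_emb_size`. The sketch below is an assumption about how such a block is typically wired (self-attention, then cross-attention via `kdim`/`vdim`, then feed-forward); the actual code may differ.

```python
import torch.nn as nn

class CrossAttnBlockSketch(nn.Module):
    """Hedged sketch of a transformer-decoder-style cross-attention block."""
    def __init__(self, ch, cond_emb_size, num_heads, dropout=0.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(ch)
        self.self_attn = nn.MultiheadAttention(ch, num_heads, dropout=dropout, batch_first=True)
        self.norm2 = nn.LayerNorm(ch)
        self.cross_attn = nn.MultiheadAttention(
            ch, num_heads, dropout=dropout, batch_first=True,
            kdim=cond_emb_size, vdim=cond_emb_size,  # keys/values come from the condition
        )
        self.norm3 = nn.LayerNorm(ch)
        self.ff = nn.Sequential(
            nn.Linear(ch, 4 * ch), nn.GELU(), nn.Dropout(dropout), nn.Linear(4 * ch, ch),
        )

    def forward(self, x, cond):
        # x: (batch, tokens, ch); cond: (batch, cond_tokens, cond_emb_size)
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        x = x + self.cross_attn(h, cond, cond, need_weights=False)[0]  # attend to the condition
        x = x + self.ff(self.norm3(x))
        return x
```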

Spatial residual transformers


source

SpatialTransformerSelfAttn

 SpatialTransformerSelfAttn (ch, num_heads, depth, dropout=0.0)

A spatial residual transformer that uses only self-attention.
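The usual pattern for a spatial transformer is to flatten a `(batch, ch, height, width)` feature map into a token sequence, run `depth` attention blocks, reshape back, and add an outer residual connection. The sketch below follows that pattern and reuses the `SelfAttnBlockSketch` from above; the projections, normalization, and naming are assumptions, not the library's code.

```python
import torch.nn as nn

class SpatialTransformerSelfAttnSketch(nn.Module):
    """Hedged sketch of a self-attention-only spatial residual transformer."""
    def __init__(self, ch, num_heads, depth, dropout=0.0):
        super().__init__()
        self.norm = nn.GroupNorm(32, ch)            # assumes ch is divisible by 32
        self.proj_in = nn.Conv2d(ch, ch, kernel_size=1)
        self.blocks = nn.ModuleList(
            [SelfAttnBlockSketch(ch, num_heads, dropout) for _ in range(depth)]
        )
        self.proj_out = nn.Conv2d(ch, ch, kernel_size=1)

    def forward(self, x):
        # x: (batch, ch, height, width)
        b, c, h, w = x.shape
        residual = x
        x = self.proj_in(self.norm(x))
        x = x.flatten(2).transpose(1, 2)            # (batch, h*w, ch) token sequence
        for block in self.blocks:
            x = block(x)
        x = x.transpose(1, 2).reshape(b, c, h, w)   # back to the spatial layout
        return self.proj_out(x) + residual          # outer residual connection
```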


source

SpatialTransformer

 SpatialTransformer (ch, cond_emb_size, num_heads, depth, dropout=0.0)

A spatial residual transformer that uses both self-attention and cross-attention on a conditional input.
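A hypothetical usage sketch, assuming the module's forward pass takes a feature map plus a conditioning sequence and returns a tensor with the same spatial shape; the argument order and shapes shown here are assumptions.

```python
import torch

# Assumes SpatialTransformer is imported from the module documented here.
st = SpatialTransformer(ch=256, cond_emb_size=768, num_heads=8, depth=2, dropout=0.0)

feats = torch.randn(4, 256, 16, 16)   # (batch, ch, height, width) feature map
cond = torch.randn(4, 77, 768)        # (batch, cond_tokens, cond_emb_size) condition
out = st(feats, cond)                 # expected output shape: (4, 256, 16, 16)
```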
