Unitary CLIP

Contrastive pre-training of a unitary encoder

Layers


source

RotaryMultiheadAttention

 RotaryMultiheadAttention (in_dim:int, embed_dim:int, num_heads:int,
                           bias:bool=True, p_rope:float=1.0,
                           max_seq_len:int=4096, base_rope:float=10000,
                           enable_qk_norm:bool=False)

MultiheadAttention as described in the paper Attention Is All You Need (https://arxiv.org/abs/1706.03762), extended with a rotary position encoding (RoPE).

The attention core is F.scaled_dot_product_attention from PyTorch. It could be swapped for FlashAttention (https://github.com/Dao-AILab/flash-attention) or xFormers.
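A minimal usage sketch, assuming the import path genQC.models.clip.unitary_clip and a plain self-attention forward over a (batch, seq, in_dim) tensor; the actual forward signature (e.g. separate query/key/value arguments or attention masks) may differ:

```python
import torch
from genQC.models.clip.unitary_clip import RotaryMultiheadAttention  # import path assumed

# RoPE applied to all head channels (p_rope=1.0) with the default base of 10000.
attn = RotaryMultiheadAttention(in_dim=256, embed_dim=256, num_heads=8)

x = torch.randn(2, 128, 256)  # (batch, seq, in_dim)
y = attn(x)                   # assumed self-attention call; signature may differ
```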


source

FeedForwardBlock

 FeedForwardBlock (in_dim:int, hidden_dim:int, dropout:float=0.0)

A small dense feed-forward network as used in transformers. Assumes a channels-last layout. Inspired by https://arxiv.org/pdf/2401.11605, with the SwiGLU modification from https://arxiv.org/pdf/2002.05202.
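For reference, a minimal SwiGLU feed-forward block in the spirit of the description above; an illustrative sketch, not the exact genQC implementation:

```python
import torch
from torch import nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """Minimal SwiGLU feed-forward sketch (channels last); illustrative only."""
    def __init__(self, in_dim: int, hidden_dim: int, dropout: float = 0.0):
        super().__init__()
        self.up   = nn.Linear(in_dim, hidden_dim)   # value branch
        self.gate = nn.Linear(in_dim, hidden_dim)   # gating branch
        self.down = nn.Linear(hidden_dim, in_dim)
        self.drop = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: silu(gate(x)) * up(x), then project back down.
        return self.down(self.drop(F.silu(self.gate(x)) * self.up(x)))
```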

Unitary-text encoder


source

UnitaryEncoderAttnBlock

 UnitaryEncoderAttnBlock (ch:int, y_emb_size:int, num_heads:int,
                          dropout:float=0.0, p_rope:float=1.0,
                          base_rope:float=10000)

A self-attention block with 2D RoPE.
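The exact 2D-RoPE layout is not shown in the signature; a common axial construction rotates one half of the channels by the row index and the other half by the column index. A minimal sketch (helper names are illustrative, not genQC API):

```python
import torch

def rope_1d(x: torch.Tensor, pos: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # Rotate channel pairs of x (..., seq, dim) by position-dependent angles.
    dim = x.shape[-1]
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)  # (dim/2,)
    angles = pos.float()[:, None] * freqs[None, :]                         # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_2d(x: torch.Tensor, rows: torch.Tensor, cols: torch.Tensor) -> torch.Tensor:
    # Axial 2D RoPE: first half of channels encodes rows, second half columns.
    half = x.shape[-1] // 2
    return torch.cat([rope_1d(x[..., :half], rows),
                      rope_1d(x[..., half:], cols)], dim=-1)
```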


source

UnitaryTextEncoderConfig

 UnitaryTextEncoderConfig (text_embed_ch:int, text_encoding_ch:int,
                           text_attn_num_heads:int, text_attn_depth:int,
                           unitary_encoding_ch:int,
                           unitary_downscale_factor:int,
                           main_num_heads:int, main_depth:int,
                           use_rope:bool, p_rope:float, base_rope:float,
                           dropout:float)

source

UnitaryTextEncoder

 UnitaryTextEncoder (text_embed_ch:int, text_encoding_ch:int,
                     text_attn_num_heads:int, text_attn_depth:int,
                     unitary_encoding_ch:int,
                     unitary_downscale_factor:int, main_num_heads:int,
                     main_depth:int, use_rope:bool, p_rope:float,
                     base_rope:float, dropout:float)

A basic nn.Module with IO functionality.
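A construction sketch with illustrative hyperparameters; the shipped genQC configs may use different values, and the import path is assumed:

```python
from genQC.models.clip.unitary_clip import UnitaryTextEncoder  # import path assumed

encoder = UnitaryTextEncoder(
    text_embed_ch=512, text_encoding_ch=256,
    text_attn_num_heads=8, text_attn_depth=4,
    unitary_encoding_ch=256, unitary_downscale_factor=2,
    main_num_heads=8, main_depth=8,
    use_rope=True, p_rope=1.0, base_rope=10000, dropout=0.0,
)
```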

Circuit encoder


source

SelfAttnBlock

 SelfAttnBlock (ch:int, num_heads:int, dropout:float=0.0,
                p_rope:float=1.0, base_rope:float=10000)

A self-attention block with RoPE.


source

PackingTransformer

 PackingTransformer (ch:int, depth:int, num_heads:int, dropout:float=0.0,
                     p_rope:float=1.0, base_rope:float=10000)

The first-stage packing/unpacking transformer of the CirDiT model. Applies RoPE along the time dimension only, not along the spatial dimension.


source

CoreTransformer

 CoreTransformer (ch:int, depth:int, num_heads:int, dropout:float=0.0,
                  p_rope:float=1.0, base_rope:float=10000)

The main transformer of the CirDiT model. Applies RoPE along the time dimension.


source

CircuitEncoderConfig

 CircuitEncoderConfig (embedder_config:dict, ch_packing:int, ch_core:int,
                       depth_packing:int, depth_core:int,
                       num_heads_packing:int, num_heads_core:int,
                       dropout:float, p_rope:float, base_rope:float)

source

CircuitEncoder

 CircuitEncoder (embedder_config:Optional[dict], ch_packing:int,
                 ch_core:int, depth_packing:int, depth_core:int,
                 num_heads_packing:int, num_heads_core:int,
                 dropout:float=0.0, p_rope:float=1.0,
                 base_rope:float=10000,
                 embedder:Optional[torch.nn.modules.module.Module]=None)

A basic nn.Module with IO functionality.
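A construction sketch with illustrative values; the embedder normally comes from the Embedding section (or is built from embedder_config), so a stand-in module is used here:

```python
from torch import nn
from genQC.models.clip.unitary_clip import CircuitEncoder  # import path assumed

toy_embedder = nn.Embedding(num_embeddings=32, embedding_dim=256)  # stand-in only

circuit_encoder = CircuitEncoder(
    embedder_config=None,        # alternatively, a config dict for the embedder
    ch_packing=256, ch_core=512,
    depth_packing=2, depth_core=8,
    num_heads_packing=8, num_heads_core=8,
    embedder=toy_embedder,
)
```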

Unitary CLIP model


source

UnitaryCLIPConfig

 UnitaryCLIPConfig (text_encoder_config:dict, clip_embed_size:int)

source

UnitaryCLIP

 UnitaryCLIP (text_encoder_config:Optional[dict],
              unitary_text_encoder:__main__.UnitaryTextEncoder,
              circuit_encoder:__main__.CircuitEncoder,
              clip_embed_size:int,
              text_encoder:Optional[torch.nn.modules.module.Module]=None)

A basic nn.Module with IO functionality.
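UnitaryCLIP ties the two encoders together for contrastive pre-training. The exact training objective is not reproduced on this page; a standard CLIP-style symmetric InfoNCE loss over matched unitary/circuit embeddings, as a sketch, looks like this:

```python
import torch
import torch.nn.functional as F

def symmetric_clip_loss(unitary_emb: torch.Tensor,
                        circuit_emb: torch.Tensor,
                        logit_scale: torch.Tensor) -> torch.Tensor:
    # Row i of each batch is assumed to be a matched unitary/circuit pair.
    u = F.normalize(unitary_emb, dim=-1)    # (batch, clip_embed_size)
    c = F.normalize(circuit_emb, dim=-1)
    logits = logit_scale.exp() * u @ c.t()  # scaled pairwise cosine similarities
    targets = torch.arange(u.shape[0], device=u.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example with random embeddings and the common CLIP init scale of 1/0.07:
loss = symmetric_clip_loss(torch.randn(8, 512), torch.randn(8, 512),
                           torch.tensor(1 / 0.07).log())
```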
