genQC



Sampling functions

Sampling functions for model inference.

Generation


source

get_batch_samples

 get_batch_samples (samples:int, auto_batch_size:int=512)
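The exact return value is internal to genQC, but the parameters suggest the total sample count is split into chunks of at most `auto_batch_size` so sampling fits in memory. A plausible, self-contained sketch of that splitting (the function name here is illustrative, not genQC's implementation):

```python
def split_into_batches(samples: int, auto_batch_size: int = 512) -> list[int]:
    """Split a total sample count into batch sizes of at most `auto_batch_size`.

    Illustrative stand-in for the batching `get_batch_samples` presumably performs.
    """
    n_full, remainder = divmod(samples, auto_batch_size)
    batches = [auto_batch_size] * n_full
    if remainder:
        batches.append(remainder)
    return batches
```

The batch sizes always sum back to the requested `samples`, e.g. `split_into_batches(1100)` yields `[512, 512, 76]`.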

source

batched_sampling

 batched_sampling (pipeline:genQC.pipeline.pipeline.Pipeline,
                   cond_kwargs:dict[str,torch.Tensor], samples:int,
                   system_size:int, num_of_qubits:int, max_gates:int,
                   g:float=1.0, init_latents:Optional[torch.Tensor]=None,
                   no_bar:bool=True, unique:bool=False,
                   auto_batch_size:int=512, enable_params:bool=True,
                   reduce_spatial:bool=True,
                   return_predicted_x0:bool=False)

e.g. cond_kwargs.keys = {"c", "micro_cond", "negative_c", "U"}
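The guidance scale `g` is presumably the usual classifier-free-guidance weight, blending the conditional and unconditional (negative-conditioned) model predictions at each denoising step. A minimal numeric sketch of that blend, using plain floats instead of model outputs (an assumption about genQC's internals, not its actual code):

```python
def cfg_blend(eps_uncond: float, eps_cond: float, g: float) -> float:
    """Classifier-free guidance: move from the unconditional prediction
    toward the conditional one, scaled by the guidance weight g."""
    return eps_uncond + g * (eps_cond - eps_uncond)
```

With `g=1.0` (the default) this returns the purely conditional prediction; `g>1` over-emphasizes the condition.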


source

prepare_prompts

 prepare_prompts (pipeline:genQC.pipeline.pipeline.Pipeline,
                  prompt:Union[str,Sequence[str]],
                  negative_prompt:Union[str,Sequence[str],NoneType]=None)
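Since `prompt` accepts either a single string or a sequence, a single prompt presumably has to be broadcast so every sample sees the same condition, with the optional `negative_prompt` aligned the same way. A self-contained sketch of that normalization (helper name and the empty-string-as-unconditional convention are assumptions):

```python
from typing import Sequence, Union


def normalize_prompts(prompt: Union[str, Sequence[str]],
                      negative_prompt: Union[str, Sequence[str], None] = None):
    """Broadcast a single prompt to a list and align negative prompts with it."""
    prompts = [prompt] if isinstance(prompt, str) else list(prompt)
    if negative_prompt is None:
        negatives = [""] * len(prompts)  # assume empty string = unconditional
    elif isinstance(negative_prompt, str):
        negatives = [negative_prompt] * len(prompts)
    else:
        negatives = list(negative_prompt)
        if len(negatives) != len(prompts):
            raise ValueError("negative_prompt length must match prompt length")
    return prompts, negatives
```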

Task specific sampling


source

generate_tensors

 generate_tensors (pipeline:genQC.pipeline.pipeline.Pipeline,
                   prompt:Union[str,Sequence[str]], samples:int,
                   system_size:int, num_of_qubits:int, max_gates:int,
                   g:float=1.0, init_latents:Optional[torch.Tensor]=None,
                   no_bar:bool=True, unique:bool=False,
                   auto_batch_size:int=512, enable_params:bool=False,
                   reduce_spatial:bool=True,
                   return_predicted_x0:bool=False,
                   negative_prompt:Union[str,Sequence[str],NoneType]=None,
                   micro_cond:Optional[torch.Tensor]=None)
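With `unique=True`, duplicate sampled encodings are presumably dropped so only distinct circuits are returned. A stand-in sketch of such order-preserving deduplication, using hashable tuples in place of torch tensors (genQC's actual filtering operates on tensors):

```python
def unique_samples(rows):
    """Drop duplicate sampled encodings, keeping first occurrences in order.

    Stand-in: rows are tuples here; genQC works on torch tensors.
    """
    seen, out = set(), []
    for row in rows:
        if row not in seen:
            seen.add(row)
            out.append(row)
    return out
```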

source

generate_compilation_tensors

 generate_compilation_tensors (pipeline:genQC.pipeline.pipeline.Pipeline,
                               prompt:Union[str,Sequence[str]],
                               U:torch.Tensor, samples:int,
                               system_size:int, num_of_qubits:int,
                               max_gates:int, g:float=1.0,
                               tensor_prod_pad:bool=True,
                               init_latents:Optional[torch.Tensor]=None,
                               no_bar:bool=True, unique:bool=False,
                               auto_batch_size:int=512,
                               enable_params:bool=True,
                               reduce_spatial:bool=True,
                               return_predicted_x0:bool=False,
                               negative_prompt:Union[str,Sequence[str],NoneType]=None,
                               negative_u:Optional[torch.Tensor]=None,
                               micro_cond:Optional[torch.Tensor]=None)

*Samples tensor encodings from the diffusion model (DM) for the given sampling parameters.

The kind of unitary padding applied depends on what was used during model training, i.e. on the concrete model weights.*
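The `tensor_prod_pad` flag suggests one common convention: a unitary `U` acting on fewer qubits than `system_size` is padded by tensoring with identities, `U_pad = U ⊗ I ⊗ … ⊗ I`. A minimal sketch with plain nested lists instead of torch tensors (as noted above, the convention a given checkpoint expects depends on its training, so treat this as illustrative):

```python
def kron(a, b):
    """Kronecker product of two matrices given as nested lists."""
    return [[a[i][j] * b[k][l]
             for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]


def pad_unitary(U, target_qubits: int):
    """Pad a 2^k x 2^k unitary to 2^target_qubits x 2^target_qubits via U ⊗ I."""
    identity = [[1, 0], [0, 1]]
    dim, target_dim = len(U), 2 ** target_qubits
    while dim < target_dim:
        U = kron(U, identity)
        dim *= 2
    return U
```

For example, padding a single-qubit gate to a 2-qubit system quadruples the matrix dimension.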

Conversion


source

decode_tensors_to_backend

 decode_tensors_to_backend (simulator:genQC.platform.simulation.Simulator,
                            tokenizer:genQC.platform.tokenizer.base_tokenizer.BaseTokenizer,
                            tensors:torch.Tensor,
                            params:Optional[torch.Tensor]=None,
                            silent:bool=True, n_jobs:int=1,
                            filter_errs:bool=True)
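This presumably maps each sampled tensor through the tokenizer back to gate instructions, builds a circuit on the chosen backend, and, with `filter_errs=True`, drops encodings that fail to decode instead of raising. A toy sketch of that loop with a made-up token vocabulary (the table and helper names are invented for illustration; genQC's tokenizers and backends are richer):

```python
TOKEN_TABLE = {0: None, 1: "h", 2: "cx"}  # made-up vocabulary; 0 = padding


def decode_row(row, token_table=TOKEN_TABLE):
    """Decode one integer-encoded row into a gate list; raise on unknown tokens."""
    gates = []
    for tok in row:
        if tok not in token_table:
            raise ValueError(f"unknown token {tok}")
        gate = token_table[tok]
        if gate is not None:  # skip padding
            gates.append(gate)
    return gates


def decode_rows(rows, filter_errs=True):
    """Decode many rows, optionally dropping the ones that fail to decode."""
    circuits, n_errs = [], 0
    for row in rows:
        try:
            circuits.append(decode_row(row))
        except ValueError:
            if not filter_errs:
                raise
            n_errs += 1
    return circuits, n_errs
```

Here an undecodable row is silently counted and skipped, mirroring what `filter_errs=True` appears to do.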
Copyright 2025, Florian Fürrutter
