Sampling functions
Sampling functions for model inference.
Generation
get_batch_samples
get_batch_samples (samples:int, auto_batch_size:int=512)
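The return value is not shown in this reference. A minimal sketch of the likely behavior, assuming the helper simply chunks the requested sample count into batches of at most `auto_batch_size`; the name `get_batch_samples_sketch` and the list-of-ints return are illustrative assumptions, not the library's confirmed API:

```python
# Hypothetical re-implementation for illustration; the actual genQC helper
# may differ in return type and edge-case handling.
def get_batch_samples_sketch(samples: int, auto_batch_size: int = 512) -> list[int]:
    """Split `samples` into chunk sizes of at most `auto_batch_size`."""
    full, rest = divmod(samples, auto_batch_size)
    return [auto_batch_size] * full + ([rest] if rest else [])

print(get_batch_samples_sketch(1100))  # [512, 512, 76]
```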
batched_sampling
batched_sampling (pipeline:genQC.pipeline.pipeline.Pipeline, cond_kwargs:dict[str,torch.Tensor], samples:int, system_size:int, num_of_qubits:int, max_gates:int, g:float=1.0, init_latents:Optional[torch.Tensor]=None, no_bar:bool=True, unique:bool=False, auto_batch_size:int=512, enable_params:bool=True, reduce_spatial:bool=True, return_predicted_x0:bool=False)
e.g. cond_kwargs.keys = {"c", "micro_cond", "negative_c", "U"}
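A hedged call-shape sketch: `batched_sampling` drives the denoising loop in chunks of `auto_batch_size` and expects already-encoded condition tensors in `cond_kwargs`. How those tensors are produced is model specific (see `prepare_prompts` below); the placeholder shapes here are assumptions, and `pipeline` is assumed to have been loaded elsewhere:

```python
import torch

# `pipeline` is assumed to be an already loaded genQC.pipeline.pipeline.Pipeline;
# the condition tensors below are placeholders whose shapes depend on the
# model's text/unitary encoders.
cond_kwargs = {
    "c": torch.randn(1, 77, 512),           # placeholder prompt embedding
    "negative_c": torch.randn(1, 77, 512),  # placeholder negative-prompt embedding
}
tensors = batched_sampling(
    pipeline,
    cond_kwargs=cond_kwargs,
    samples=128,          # total circuits to draw, internally chunked
    system_size=3,        # qubit register size of the encoding
    num_of_qubits=3,      # qubits actually used by the circuits
    max_gates=12,         # time dimension of the circuit encoding
    g=10.0,               # classifier-free guidance scale
    unique=True,          # keep only distinct encodings
    auto_batch_size=256,  # chunk size for memory-bounded sampling
)
```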
prepare_prompts
prepare_prompts (pipeline:genQC.pipeline.pipeline.Pipeline, prompt:Union[str,Sequence[str]], negative_prompt:Union[str,Sequence[str],NoneType]=None)
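`prepare_prompts` encodes text prompts (and optional negative prompts) with the pipeline's text encoder so they can be passed as conditions. A hedged sketch of wiring it into `batched_sampling` by hand; the `(c, negative_c)` return unpacking is an assumption inferred from the signature and the `cond_kwargs` keys above, not a confirmed contract:

```python
# Hedged sketch: manual prompt encoding followed by batched sampling.
# The two-tensor return is an assumption; verify against your genQC version.
prompt = "Compile using: ['h', 'cx', 'ccx']"  # example gate-set prompt
c, negative_c = prepare_prompts(pipeline, prompt, negative_prompt="")

cond_kwargs = {"c": c}
if negative_c is not None:
    cond_kwargs["negative_c"] = negative_c

tensors = batched_sampling(pipeline, cond_kwargs, samples=64,
                           system_size=3, num_of_qubits=3,
                           max_gates=12, g=10.0)
```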
Task specific sampling
generate_tensors
generate_tensors (pipeline:genQC.pipeline.pipeline.Pipeline, prompt:Union[str,Sequence[str]], samples:int, system_size:int, num_of_qubits:int, max_gates:int, g:float=1.0, init_latents:Optional[torch.Tensor]=None, no_bar:bool=True, unique:bool=False, auto_batch_size:int=512, enable_params:bool=False, reduce_spatial:bool=True, return_predicted_x0:bool=False, negative_prompt:Union[str,Sequence[str],NoneType]=None, micro_cond:Optional[torch.Tensor]=None)
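`generate_tensors` is the prompt-only convenience wrapper: it handles prompt encoding and batching in one call. A minimal usage sketch, assuming a pipeline has been loaded elsewhere; the prompt format and parameter values are illustrative and depend on the checkpoint:

```python
# Minimal usage sketch; assumes `pipeline` is a loaded Pipeline whose
# checkpoint understands this prompt format.
tensors = generate_tensors(
    pipeline,
    prompt="Generate SRV: [1, 1, 2]",  # example prompt; format is model specific
    samples=64,
    system_size=3,
    num_of_qubits=3,
    max_gates=12,
    g=4.0,        # guidance scale
    unique=True,
)
```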
generate_compilation_tensors
generate_compilation_tensors (pipeline:genQC.pipeline.pipeline.Pipeline, prompt:Union[str,Sequence[str]], U:torch.Tensor, samples:int, system_size:int, num_of_qubits:int, max_gates:int, g:float=1.0, tensor_prod_pad:bool=True, init_latents:Optional[torch.Tensor]=None, no_bar:bool=True, unique:bool=False, auto_batch_size:int=512, enable_params:bool=True, reduce_spatial:bool=True, return_predicted_x0:bool=False, negative_prompt:Union[str,Sequence[str],NoneType]=None, negative_u:Optional[torch.Tensor]=None, micro_cond:Optional[torch.Tensor]=None)
*Samples tensor encodings from the diffusion model (DM) for the given sample parameters. The kind of unitary padding applied depends on what was used during model training, and hence on the concrete model weights.*
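For unitary compilation the target `U` is passed alongside the prompt. A usage sketch in the style of the genQC tutorials; the identity target and all parameter values are placeholders, and the checkpoint loading is assumed to have happened elsewhere:

```python
import torch

# Usage sketch; `pipeline` is assumed loaded. The identity is a stand-in
# target: replace it with the (2^n x 2^n) unitary you want to compile.
U = torch.eye(8, dtype=torch.complex64)  # placeholder 3-qubit target unitary

out_tensors = generate_compilation_tensors(
    pipeline,
    prompt="Compile using: ['h', 'cx', 'z', 'ccx', 'swap']",  # allowed gate set
    U=U,
    samples=128,
    system_size=3,
    num_of_qubits=3,
    max_gates=12,
    g=10.0,
)
```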
Conversion
decode_tensors_to_backend
decode_tensors_to_backend (simulator:genQC.platform.simulation.Simulator, tokenizer:genQC.platform.tokenizer.base_tokenizer.BaseTokenizer, tensors:torch.Tensor, params:Optional[torch.Tensor]=None, silent:bool=True, n_jobs:int=1, filter_errs:bool=True)
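Sampled tensors are decoded back into executable circuits via a tokenizer and a simulator backend. A hedged sketch continuing the compilation example above; the `CircuitTokenizer` vocabulary construction and the `(circuits, valid_tensors)` return unpacking are assumptions to verify against the genQC tutorials for your model version:

```python
from genQC.platform.simulation import Simulator, CircuitBackendType
from genQC.platform.tokenizer.circuits_tokenizer import CircuitTokenizer

# Qiskit backend for decoding; other backends may be available.
simulator = Simulator(CircuitBackendType.QISKIT)

# Assumed: build the token vocabulary from the pipeline's gate pool,
# with token 0 reserved for padding.
vocabulary = {gate: i + 1 for i, gate in enumerate(pipeline.gate_pool)}
tokenizer = CircuitTokenizer(vocabulary)

# filter_errs=True drops tensors that fail to decode; the exact return
# structure (circuits plus their surviving tensors) is an assumption.
circuits, valid_tensors = decode_tensors_to_backend(
    simulator, tokenizer, out_tensors, filter_errs=True
)
print(circuits[0])  # e.g. a qiskit.QuantumCircuit
```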