print("OpenCLIP version:", open_clip.__version__)
OpenCLIP version: 2.30.0
API Reference
FrozenOpenCLIPEmbedderConfig (arch:str, version:str, max_length:int, freeze:bool, layer:str)
FrozenOpenCLIPEmbedder (arch='ViT-B-32', version='datacomp_xl_s13b_b90k', max_length=77, freeze=True, layer='penultimate', **kwargs)
Loads and freezes the OpenCLIP transformer encoder for text prompts.
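A minimal usage sketch, assuming the defaults above; the prompt strings and variable names are illustrative rather than the exact ones used to produce the outputs that follow:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Instantiate the frozen OpenCLIP text encoder and move it to the GPU.
emb = FrozenOpenCLIPEmbedder(arch='ViT-B-32', version='datacomp_xl_s13b_b90k',
                             max_length=77, freeze=True, layer='penultimate').to(device)

# Tokenize a batch of prompt strings and push the padded token ids to the device.
c = emb.tokenize_and_push_to_device(["1,1,2", "2,2,2"])  # hypothetical prompts
c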
[INFO]: Cuda device has a capability of 8.9 (>= 8), allowing tf32 matmul.
tensor([[49406, 314, 272, 267, 273, 267, 273, 316, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[49406, 314, 272, 267, 273, 267, 320, 273, 316, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], device='cuda:0')
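Calling the embedder on the tokenized batch returns the per-token embeddings. Continuing the sketch above, a call of this form yields the shape shown below:

# Encode the token ids; the frozen encoder returns one 512-d vector per token position.
enc = emb(c)
enc.shape, enc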
(torch.Size([2, 77, 512]),
tensor([[[-0.3819, -0.3694, -0.0712, ..., 0.0959, -0.0834, -0.0929],
[-0.2669, 0.1847, -0.5890, ..., 0.7211, -1.7483, 1.2858],
[-0.9821, -0.6650, 0.2107, ..., -0.4223, 0.5351, 0.8494],
...,
[-0.0300, 1.3871, 0.3989, ..., 0.2657, -0.1257, -1.3758],
[-0.0797, 1.4044, 0.3595, ..., 0.2328, -0.0766, -1.3314],
[ 0.1599, 1.5989, 0.2775, ..., 0.1202, -0.1294, -1.5480]],
[[-0.3819, -0.3694, -0.0712, ..., 0.0959, -0.0834, -0.0929],
[-1.2507, 1.4711, 0.7264, ..., 1.1489, -0.4983, 0.4494],
[-1.2645, -0.3412, 0.9422, ..., 0.1529, 0.0271, 0.4574],
...,
[-0.0694, 1.4021, 0.4687, ..., 0.2277, -0.0694, -1.3635],
[-0.1196, 1.4167, 0.4262, ..., 0.1955, -0.0225, -1.3245],
[ 0.1381, 1.6182, 0.3528, ..., 0.0775, -0.0853, -1.5246]]], device='cuda:0'))
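Token ids can also be decoded back to text for inspection, for example with open_clip's decode helper (assuming it is available in this version); padding ids decode to '!':

import open_clip

# Decode a single tokenized prompt back to its text form.
open_clip.decode(emb.tokenize_and_push_to_device(["2,2,2"])[0].cpu())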
'<start_of_text>2 , 2 , 2 <end_of_text>!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'
The model now also accepts (batched) scalar int values that are mapped to unique conditions, e.g. \([1,2,2]=4\). If the input is such an int, the output is the cached, pre-embedded tensor; if a non-int (e.g. a tokenized string) is passed, the embedding is computed live as usual.
CachedFrozenOpenCLIPEmbedderConfig (arch:str, version:str, max_length:int, freeze:bool, layer:str, enable_cache_token_limit:bool)
CachedFrozenOpenCLIPEmbedder (arch='ViT-B-32', version='datacomp_xl_s13b_b90k', max_length=77, freeze=True, layer='penultimate', enable_cache_token_limit:bool=True, **kwargs)
Adds caching support to FrozenOpenCLIPEmbedder.
a = CachedFrozenOpenCLIPEmbedder(enable_cache_token_limit=True).to(device)
p = ["1,1,2", "2,2,2", "4,4,4", "6,4,7", "6,4,8", "6,4,9", "6,4,1"]  # the unique condition prompts to pre-embed
a.generate_cache(p)
[INFO]: - `generate_cache` infered a TOKEN limit of 7
[INFO]: caching trying to allocate memory (7, 77, 512) on cuda, approx. 0.001 GB
CachedFrozenOpenCLIPEmbedderConfig(arch='ViT-B-32', version='datacomp_xl_s13b_b90k', max_length=7, freeze=True, layer='penultimate', enable_cache_token_limit=True)
c_cached = torch.tensor([0, 0, 1], device=a.device)  # int indices into the cache: 0 -> "1,1,2", 1 -> "2,2,2"
c_uncached = a.tokenize_and_push_to_device(["1,1,2", "1,1,2", "2,2,2"])  # the same prompts, tokenized live
enc_cached = a(c_cached)      # looked up from the pre-computed cache
enc_uncached = a(c_uncached)  # embedded on the fly
enc_cached.shape, enc_uncached.shape, torch.allclose(enc_cached, enc_uncached, atol=1e-3)
(torch.Size([3, 7, 512]), torch.Size([3, 7, 512]), False)
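Both paths return embeddings truncated to the inferred token limit of 7 rather than the full max_length of 77. Note that the cached and live results do not pass the allclose check at atol=1e-3 here, presumably due to small numerical differences between the two computation paths (e.g. the TF32 matmul enabled above).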