Instructions for using h94/IP-Adapter-FaceID with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use h94/IP-Adapter-FaceID with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "h94/IP-Adapter-FaceID", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
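Note that the auto-generated snippet above treats the repo as a standalone pipeline, whereas the FaceID weights are adapters loaded on top of a base Stable Diffusion checkpoint. A minimal sketch, assuming a recent Diffusers release with `load_ip_adapter` (the base model and `weight_name` are illustrative; FaceID additionally needs insightface face embeds passed as `ip_adapter_image_embeds` at inference time):

```python
def build_faceid_pipeline(weight_name="ip-adapter-faceid_sd15.bin"):
    # Sketch only: downloads the base model on first call.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    # The FaceID weights sit at the repo root and ship without a CLIP image
    # encoder, hence subfolder=None and image_encoder_folder=None.
    pipe.load_ip_adapter(
        "h94/IP-Adapter-FaceID",
        subfolder=None,
        weight_name=weight_name,
        image_encoder_folder=None,
    )
    return pipe
```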
Is current usage only via code?
#13 · opened by jndietz
I tried to load this into the AUTOMATIC1111 web UI via the ControlNet extension, but got some errors. Can this be used like other IP-Adapter models, or can it only be used in code (per the README.md)?
```
2023-12-31 08:07:03,631 - ControlNet - INFO - Loaded state_dict from [E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\models\ip-adapter-faceid-plusv2_sd15.bin]
*** Error running process: E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "E:\github\stable-diffusion-webui\modules\scripts.py", line 718, in process
        script.process(p, *script_args)
      File "E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1053, in process
        self.controlnet_hack(p)
      File "E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1042, in controlnet_hack
        self.controlnet_main_entry(p)
      File "E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 758, in controlnet_main_entry
        model_net = Script.load_control_model(p, unet, unit.model)
      File "E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 364, in load_control_model
        model_net = Script.build_control_model(p, unet, model)
      File "E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 393, in build_control_model
        network = build_model_by_guess(state_dict, unet, model_path)
      File "E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_model_guess.py", line 244, in build_model_by_guess
        network = PlugableIPAdapter(state_dict)
      File "E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlmodel_ipadapter.py", line 334, in __init__
        self.ipadapter = IPAdapterModel(state_dict,
      File "E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlmodel_ipadapter.py", line 212, in __init__
        self.load_ip_adapter(state_dict)
      File "E:\github\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlmodel_ipadapter.py", line 215, in load_ip_adapter
        self.image_proj_model.load_state_dict(state_dict["image_proj"])
      File "E:\github\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for MLPProjModel:
        Missing key(s) in state_dict: "proj.3.weight", "proj.3.bias".
        Unexpected key(s) in state_dict: "norm.weight", "norm.bias", "perceiver_resampler.proj_in.weight", "perceiver_resampler.proj_in.bias", "perceiver_resampler.proj_out.weight", "perceiver_resampler.proj_out.bias", "perceiver_resampler.norm_out.weight", "perceiver_resampler.norm_out.bias", "perceiver_resampler.layers.0.0.norm1.weight", "perceiver_resampler.layers.0.0.norm1.bias", "perceiver_resampler.layers.0.0.norm2.weight", "perceiver_resampler.layers.0.0.norm2.bias", "perceiver_resampler.layers.0.0.to_q.weight", "perceiver_resampler.layers.0.0.to_kv.weight", "perceiver_resampler.layers.0.0.to_out.weight", "perceiver_resampler.layers.0.1.0.weight", "perceiver_resampler.layers.0.1.0.bias", "perceiver_resampler.layers.0.1.1.weight", "perceiver_resampler.layers.0.1.3.weight", "perceiver_resampler.layers.1.0.norm1.weight", "perceiver_resampler.layers.1.0.norm1.bias", "perceiver_resampler.layers.1.0.norm2.weight", "perceiver_resampler.layers.1.0.norm2.bias", "perceiver_resampler.layers.1.0.to_q.weight", "perceiver_resampler.layers.1.0.to_kv.weight", "perceiver_resampler.layers.1.0.to_out.weight", "perceiver_resampler.layers.1.1.0.weight", "perceiver_resampler.layers.1.1.0.bias", "perceiver_resampler.layers.1.1.1.weight", "perceiver_resampler.layers.1.1.3.weight", "perceiver_resampler.layers.2.0.norm1.weight", "perceiver_resampler.layers.2.0.norm1.bias", "perceiver_resampler.layers.2.0.norm2.weight", "perceiver_resampler.layers.2.0.norm2.bias", "perceiver_resampler.layers.2.0.to_q.weight", "perceiver_resampler.layers.2.0.to_kv.weight", "perceiver_resampler.layers.2.0.to_out.weight", "perceiver_resampler.layers.2.1.0.weight", "perceiver_resampler.layers.2.1.0.bias", "perceiver_resampler.layers.2.1.1.weight", "perceiver_resampler.layers.2.1.3.weight", "perceiver_resampler.layers.3.0.norm1.weight", "perceiver_resampler.layers.3.0.norm1.bias", "perceiver_resampler.layers.3.0.norm2.weight", "perceiver_resampler.layers.3.0.norm2.bias", "perceiver_resampler.layers.3.0.to_q.weight", "perceiver_resampler.layers.3.0.to_kv.weight", "perceiver_resampler.layers.3.0.to_out.weight", "perceiver_resampler.layers.3.1.0.weight", "perceiver_resampler.layers.3.1.0.bias", "perceiver_resampler.layers.3.1.1.weight", "perceiver_resampler.layers.3.1.3.weight".
        size mismatch for proj.0.weight: copying a param with shape torch.Size([1024, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
        size mismatch for proj.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for proj.2.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([768, 512]).
        size mismatch for proj.2.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([768]).
```
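The traceback suggests a variant mismatch rather than a broken file: the `ip-adapter-faceid-plusv2_sd15.bin` checkpoint stores its image projector as a perceiver resampler (hence the `perceiver_resampler.*` keys), while the extension instantiated a plain `MLPProjModel`, so the keys and shapes cannot line up. A hypothetical helper (not part of sd-webui-controlnet) sketching how the variant could be told apart from the checkpoint keys alone, using the key names visible in the traceback:

```python
def detect_faceid_variant(image_proj_keys):
    """Guess the FaceID variant from the keys of state_dict["image_proj"]."""
    if any(k.startswith("perceiver_resampler.") for k in image_proj_keys):
        # Plus / PlusV2 checkpoints route face embeds through a perceiver
        # resampler, which a plain MLPProjModel cannot load.
        return "faceid-plus"
    # The base FaceID checkpoint uses a simple MLP projection ("proj.*" keys).
    return "faceid"
```

For example, `detect_faceid_variant(["perceiver_resampler.proj_in.weight"])` returns `"faceid-plus"`, while `detect_faceid_variant(["proj.0.weight"])` returns `"faceid"` — which is consistent with updating the ControlNet extension to a build that recognizes the Plus variants before loading.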