Instructions to use h94/IP-Adapter-FaceID with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use h94/IP-Adapter-FaceID with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "h94/IP-Adapter-FaceID", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Update `image_encoder_path` to a public CLIP one
#15 · opened by multimodalart (HF Staff)
Hi there, I'm proposing this PR to correct the IP Adapter Plus demo.
However, I'm not sure this is the correct CLIP model, as the intended one isn't referenced anywhere.
Happy to update the PR if openai/clip-vit-large-patch14 is the correct model (to match the model used to train SD 1.5).
Looks correct to me; it's pointing to the CLIP Vision model in the main IP-Adapter repo here - https://huggingface.co/h94/IP-Adapter/tree/main/models/image_encoder
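For reference, that public image encoder can be loaded directly with `transformers` — a sketch assuming the `models/image_encoder` subfolder path from the link above (this triggers a sizable weight download):

```python
from transformers import CLIPVisionModelWithProjection

# Public CLIP Vision encoder shipped in the main IP-Adapter repo,
# referenced by image_encoder_path in the IP Adapter Plus demo.
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="models/image_encoder"
)
print(image_encoder.config.hidden_size)
```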
yes, thanks a lot
h94 changed pull request status to merged