LangFlow

LangFlow is a continuous diffusion language model that operates in embedding space. Unlike discrete diffusion models (MDLM, SEDD, DUO), LangFlow performs diffusion directly on continuous token embeddings, enabling smoother denoising dynamics.
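To make the idea concrete, here is a minimal, self-contained sketch of sampling in embedding space: start from Gaussian noise, take Euler steps toward clean embeddings, then round each position to its nearest vocabulary embedding to recover token ids. The tiny embedding table and the toy "denoiser" below are stand-ins for illustration only; the real model predicts the drift with a learned DiT network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table: 10-token vocabulary, 4-dim embeddings
# (a stand-in for the real 50,257-entry table; purely illustrative).
vocab, dim = 10, 4
E = rng.normal(size=(vocab, dim))

def denoise_step(x, t, dt):
    """One Euler step of a toy continuous denoiser.

    Stands in for the learned network: it nudges each position's
    embedding toward its nearest vocabulary embedding. In LangFlow
    this drift comes from the trained DiT backbone, not a rule.
    """
    d2 = ((x[:, None, :] - E[None, :, :]) ** 2).sum(-1)  # (seq, vocab)
    target = E[d2.argmin(-1)]                            # (seq, dim)
    drift = (target - x) / max(t, dt)                    # pull toward target
    return x + drift * dt

def sample(seq_len=6, num_steps=32):
    x = rng.normal(size=(seq_len, dim))  # start from pure Gaussian noise
    dt = 1.0 / num_steps
    t = 1.0
    for _ in range(num_steps):
        x = denoise_step(x, t, dt)
        t -= dt
    # Round each continuous embedding back to the nearest token id.
    d2 = ((x[:, None, :] - E[None, :, :]) ** 2).sum(-1)
    return d2.argmin(-1)

ids = sample()
print(ids.shape)  # (6,)
```

The final rounding step is what distinguishes this family from discrete diffusion: denoising happens entirely in continuous space, and tokens appear only at the end.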

Using LangFlow

To generate text with the pre-trained model, use the following snippet:

from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('gpt2')
model = AutoModelForMaskedLM.from_pretrained('chumengl/langflow-owt', trust_remote_code=True)

# Generate samples
samples = model.generate_samples(num_samples=5, num_steps=128)
texts = tokenizer.batch_decode(samples, skip_special_tokens=True)
for text in texts:
    print(text)

Model Details

  • Architecture: DiT (Diffusion Transformer) backbone with adaptive layer normalization
  • Context Length: 1024 tokens
  • Parameters: ~130M non-embedding parameters (similar to GPT-2 medium)
  • Training: 1M steps on OpenWebText corpus
  • Tokenizer: GPT-2 tokenizer (50,257 vocab size)
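The "adaptive layer normalization" mentioned above is the conditioning mechanism DiT blocks use: a parameter-free LayerNorm whose per-channel shift and scale are regressed from the timestep embedding. The sketch below shows the mechanism in isolation; the projection matrix and dimensions are assumptions for illustration, not the model's actual weights.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical conditioning projection: in DiT, a small MLP maps the
# timestep embedding to per-channel shift/scale (here a single matrix).
W = rng.normal(size=(dim, 2 * dim)) * 0.02

def ada_layer_norm(x, t_emb):
    """LayerNorm whose shift/scale come from the timestep embedding."""
    shift, scale = np.split(t_emb @ W, 2, axis=-1)
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    xn = (x - mu) / np.sqrt(var + 1e-6)   # parameter-free LayerNorm
    return xn * (1 + scale) + shift       # modulate by conditioning

x = rng.normal(size=(3, dim))      # (seq, dim) activations
t_emb = rng.normal(size=(dim,))    # timestep embedding
y = ada_layer_norm(x, t_emb)
print(y.shape)  # (3, 8)
```

Because the modulation depends on the timestep, the same transformer weights can behave differently at high noise (early steps) and low noise (late steps).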

Model Card Contact

Chumeng Liang (chumengl@illinois.edu)
