FLARE: Fast Low-rank Attention Routing Engine
Scaling self-attention with fixed-length latent sequences
Some thoughts from the team: Keaton, Arya, Roth
How our model uses chamfers, fillets, etc.
A general list of our working problems
Showcasing how reasoning models can unlock a whole new level of design
5 months from 0 lines of code to SOTA text-to-CAD and $4.8M in funding