Kaiyu Yang

I am a research scientist on the Fundamental AI Research (FAIR) team at Meta. I am based in New York and work on AI/LLMs for mathematical reasoning. Before joining Meta, I was a postdoctoral scholar at Caltech, working with Pietro Perona, Yisong Yue, and Swarat Chaudhuri. I received my Ph.D. from Princeton University, where I was advised by Jia Deng and also worked with Olga Russakovsky and Danqi Chen.

杨凯峪  /  kaiyuy [MASK] meta [MASK] com  /  CV  /  Bio  /  Google Scholar  /  GitHub

News
  • My collaborator Jiacheng Chen is currently applying to Ph.D. programs. If you're looking for a strong student in AI for mathematical reasoning and theorem proving, consider recruiting him! Feel free to contact me to discuss his qualifications.
  • We're looking for a postdoc to work on AI for formal mathematical reasoning. The search will close very soon, so if you're interested, apply via this link and email me as soon as possible.
Research

I work on AI for mathematics, focusing on formal theorem proving, autoformalization, and LLM-based mathematical reasoning. Selected publications are listed below.

Towards Large Language Models as Copilots for Theorem Proving in Lean
Peiyang Song, Kaiyu Yang, and Anima Anandkumar
In submission, 2024
arXiv / code / demo / talk

We introduce a framework for running neural network inference directly in Lean. It enables programmers to build LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users, such as tools that suggest proof steps or complete intermediate proof goals.
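
To give a concrete feel, here is a minimal sketch of invoking such a tool inside a proof. The tactic name `suggest_tactics` follows the Lean Copilot repository; treat the exact API as an assumption.

```lean
import LeanCopilot

-- Ask an LLM for candidate next proof steps at the current goal.
-- (Tactic name per the Lean Copilot repository; exact API may differ.)
example (a b c : Nat) : a + b + c = a + c + b := by
  suggest_tactics
```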

SciInstruct: a Self-Reflective Instruction Annotated Dataset for Training Scientific Language Models
Dan Zhang, Ziniu Hu, Sining Zhoubian, Zhengxiao Du, Kaiyu Yang, Zihan Wang, Yisong Yue, Yuxiao Dong, and Jie Tang
Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2024
arXiv / code

We curate SciInstruct, a diverse, high-quality dataset covering college-level mathematics, physics, chemistry, and formal proofs. By finetuning the ChatGLM family of LLMs on SciInstruct, we obtain SciGLM, a suite of scientific language models for college-level mathematical and scientific reasoning.

A Survey on Deep Learning for Theorem Proving
Zhaoyu Li, Jialiang Sun, Logan Murphy, Qidong Su, Zenan Li, Xian Zhang, Kaiyu Yang, and Xujie Si
Conference on Language Modeling (COLM), 2024
arXiv / code

We present the first comprehensive survey of deep learning for theorem proving and autoformalization.

Autoformalizing Euclidean Geometry
Logan Murphy*, Kaiyu Yang*, Jialiang Sun, Zhaoyu Li, Anima Anandkumar, and Xujie Si (* equal contribution)
International Conference on Machine Learning (ICML), 2024
arXiv / code

We release LeanEuclid, a benchmark for testing autoformalization, consisting of Euclid's Elements (Book I) manually formalized in Lean. It is challenging for state-of-the-art LLMs like GPT-4V. Furthermore, the process of constructing LeanEuclid has uncovered intriguing ambiguities in Euclid's original text.
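
To give a flavor of the formalization target, here is a hand-written sketch of Euclid's Book I, Proposition 1 (constructing an equilateral triangle on a given segment). The names `Point` and `seg_eq` are assumptions for illustration, not LeanEuclid's actual encoding.

```lean
-- Hand-written sketch with assumed names; LeanEuclid's encoding differs.
axiom Point : Type
axiom seg_eq : Point → Point → Point → Point → Prop  -- "|AB| = |CD|"

-- Euclid I.1: on any segment AB there is a point C with |AC| = |BC| = |AB|.
theorem proposition_1 (a b : Point) :
    ∃ c : Point, seg_eq a c a b ∧ seg_eq b c a b :=
  sorry
```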

LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, and Anima Anandkumar
Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2023, Oral presentation
arXiv / project / code / talk / slides / media

Can LLMs generate mathematical proofs that can be rigorously checked? We release LeanDojo: an open-source playground consisting of toolkits, benchmarks, and models for LLMs to prove formal theorems in the Lean proof assistant.
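
The sketch below shows how a prover can interact with Lean through LeanDojo's Python API. The class and method names follow the lean-dojo package's documentation; the repository, theorem, and tactic are placeholders.

```python
from lean_dojo import LeanGitRepo, Theorem, Dojo, ProofFinished

# Placeholder repo/branch/file/theorem; names follow the lean-dojo docs.
repo = LeanGitRepo("https://github.com/yangky11/lean4-example", "main")
theorem = Theorem(repo, "Lean4Example.lean", "hello_world")

# Dojo turns the theorem into a gym-like environment: it yields the initial
# proof state, and run_tac applies a tactic string to produce the next state.
with Dojo(theorem) as (dojo, init_state):
    result = dojo.run_tac(init_state, "rw [Nat.add_comm]")  # placeholder tactic
    if isinstance(result, ProofFinished):
        print("Theorem proved!")
```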

Infinite Photorealistic Worlds using Procedural Generation
Alexander Raistrick*, Lahav Lipson*, Zeyu Ma*, Lingjie Mei, Mingzhe Wang, Yiming Zuo, Karhan Kayan, Hongyu Wen, Beining Han, Yihan Wang, Alejandro Newell, Hei Law, Ankit Goyal, Kaiyu Yang, and Jia Deng
Conference on Computer Vision and Pattern Recognition (CVPR), 2023
arXiv / project / code

Data drives progress in computer vision. We introduce Infinigen: a generator of unlimited high-quality 3D data. 100% procedural, no external assets, no AI. Free and open source.

Learning Symbolic Rules for Reasoning in Quasi-Natural Language
Kaiyu Yang and Jia Deng
Transactions on Machine Learning Research (TMLR), 2023
arXiv / code

We propose MetaQNL, a symbolic "Quasi-Natural" language that can express both formal logic and natural language. Instead of manually constructing MetaQNL rules, we propose MetaInduce: an algorithm for learning rules from data.
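
As a hypothetical illustration (simplified syntax; see the paper for the exact formalism), a MetaQNL rule pairs quasi-natural-language premises with a conclusion, where variables such as $X can match spans of words:

```python
# Hypothetical MetaQNL-style rule in simplified syntax (the paper's exact
# formalism differs). "$X" matches a span of words, so one rule covers
# "Tom is a cat", "every stray is a cat", and so on.
rule = {
    "premises": ["$X is a cat"],
    "conclusion": "$X is an animal",
}
```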

Generating Natural Language Proofs with Verifier-Guided Search
Kaiyu Yang, Jia Deng, and Danqi Chen
Empirical Methods in Natural Language Processing (EMNLP), 2022, Oral presentation
arXiv / code / slides / poster

We introduce NLProofS (Natural Language Proof Search) for multi-step logical reasoning in natural language. Given a hypothesis and a set of supporting facts, it generates a proof tree showing how the hypothesis is derived from the supporting facts.
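
As a hypothetical illustration of the task (the example below is invented, not taken from the dataset or codebase):

```python
# Invented example of the task's input/output (not from the actual dataset).
facts = {
    "sent1": "an animal requires warmth for survival",
    "sent2": "a mouse is a kind of animal",
}
hypothesis = "a mouse requires warmth for survival"

# NLProofS searches for a proof tree whose leaves are supporting facts and
# whose root is the hypothesis, with a verifier scoring each proof step:
proof = "sent1 & sent2 -> hypothesis"
```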

A Study of Face Obfuscation in ImageNet
Kaiyu Yang, Jacqueline Yau, Li Fei-Fei, Jia Deng, and Olga Russakovsky
International Conference on Machine Learning (ICML), 2022
arXiv / code / slides / talk / poster / project / media

We annotate human faces in ImageNet and obfuscate them for privacy protection. We show that face obfuscation does not hurt image classification and transfer learning.

Strongly Incremental Constituency Parsing with Graph Neural Networks
Kaiyu Yang and Jia Deng
Neural Information Processing Systems (NeurIPS), 2020
arXiv / code / slides / talk / poster

We propose a novel transition-based constituency parser named Attach-Juxtapose, inspired by how humans perform parsing.

Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D
Ankit Goyal, Kaiyu Yang, Dawei Yang, and Jia Deng
Neural Information Processing Systems (NeurIPS), 2020, Spotlight presentation
arXiv / code

We propose Minimally Contrastive Data Collection, a novel crowdsourcing method for reducing dataset bias, and use it to construct Rel3D, the first large-scale, human-annotated dataset for grounding spatial relations in 3D.

Filtering and Balancing the Distribution of the People Subtree in the ImageNet Hierarchy
Kaiyu Yang, Klint Qinami, Li Fei-Fei, Jia Deng, and Olga Russakovsky
Conference on Fairness, Accountability, and Transparency (FAT*), 2020
arXiv / slides / talk / blog / media

We reveal and mitigate fairness issues of ImageNet, filtering its concept vocabulary and balancing its representation of various demographic groups in images.

Learning to Prove Theorems via Interacting with Proof Assistants
Kaiyu Yang and Jia Deng
International Conference on Machine Learning (ICML), 2019
arXiv / code / slides / poster

We introduce CoqGym, one of the first and largest datasets for theorem proving in proof assistants, and ASTactic, a deep learning prover that generates tactics as programs.

SpatialSense: An Adversarially Crowdsourced Benchmark for Spatial Relation Recognition
Kaiyu Yang, Olga Russakovsky, and Jia Deng
International Conference on Computer Vision (ICCV), 2019
arXiv / code / poster

We propose Adversarial Crowdsourcing to reduce dataset bias and use it to construct SpatialSense, a challenging dataset for recognizing spatial relations in images.

Stacked Hourglass Networks for Human Pose Estimation
Alejandro Newell, Kaiyu Yang, and Jia Deng
European Conference on Computer Vision (ECCV), 2016
arXiv / code

We introduce Stacked Hourglass Networks—one of the most popular architectures for human pose estimation, object detection, and more.
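
The sketch below conveys the core idea (repeated bottom-up and top-down processing with skip connections at every scale); the layers and hyperparameters are simplifications, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Hourglass(nn.Module):
    """One hourglass module, simplified: recurse to lower resolutions and
    merge each scale back in through a skip connection. Plain 3x3 convs
    stand in for the paper's residual blocks."""

    def __init__(self, depth: int, channels: int):
        super().__init__()
        self.skip = nn.Conv2d(channels, channels, 3, padding=1)
        self.down = nn.Conv2d(channels, channels, 3, padding=1)
        self.inner = (Hourglass(depth - 1, channels) if depth > 1
                      else nn.Conv2d(channels, channels, 3, padding=1))
        self.up = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        skip = self.skip(x)                            # keep this scale
        y = F.max_pool2d(x, 2)                         # bottom-up: downsample
        y = self.inner(self.down(y))                   # recurse at lower scale
        y = F.interpolate(self.up(y), scale_factor=2)  # top-down: upsample
        return skip + y                                # merge via skip connection

# Stacking several such modules with intermediate supervision gives the full
# network; e.g., one module applied to a 64x64 feature map:
out = Hourglass(depth=4, channels=256)(torch.randn(1, 256, 64, 64))
```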

Workshops & Tutorials

I'm a co-organizer of the following events:

Media

My work has been covered by:

Mentoring

I'm fortunate to have worked with many talented students:

  • Zhaoyu Li: Ph.D. student @ University of Toronto
  • Jiacheng Chen: Undergrad @ South China University of Technology
  • Peiyang Song: Undergrad @ UCSB -> Undergrad @ Caltech
  • Rahul Chalamala: Undergrad @ Caltech -> Researcher @ Together AI
  • Shixing Yu: Master's student @ UT Austin -> Ph.D. student @ Cornell
  • Gene Chou: Undergrad @ Princeton -> Ph.D. student @ Cornell
  • Jacqueline Yau: Master's student @ Stanford -> Ph.D. student @ UIUC
Teaching

Website template credit: Jon Barron