We released LeanDojo: tools, models, and benchmarks for applying machine learning to theorem proving in Lean.
Research
My research focuses on Neurosymbolic AI, which aims to make machine learning capable of symbolic reasoning. I have approached the goal from two angles: (1) applying machine learning to symbolic reasoning tasks, such as mathematical reasoning and theorem proving in formal logic or natural language; (2) introducing symbolic components into machine learning models to make them more interpretable, verifiable, and data-efficient.
Currently, I'm working on AI that can understand and reason about mathematics. Mathematical reasoning is a critical milestone toward human-level intelligence, and it can potentially transform many important problems in science and engineering, such as solving PDEs and formal verification.
Towards Large Language Models as Copilots for Theorem Proving in Lean
Peiyang Song, Kaiyu Yang†, and Anima Anandkumar† († Equal advising)
NeurIPS MATH-AI Workshop, 2023
code
We introduce a framework for running neural network inference directly in Lean. It enables programmers to build various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users, including tools for suggesting proof steps and completing intermediate proof goals using LLMs.
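As a sketch of what such a tool looks like to a Lean user, the snippet below shows an LLM-backed suggestion tactic invoked in the middle of a proof. The tactic name and behavior here are illustrative, not a definitive rendering of the framework's API:

```lean
-- Illustrative sketch: an LLM-based tactic invoked mid-proof.
-- `suggest_tactics` (name illustrative) would query the model and
-- display candidate next proof steps for the current goal.
theorem double_eq_add (a : Nat) : 2 * a = a + a := by
  suggest_tactics
```

Because the tool runs inside Lean itself, suggestions appear in the editor's goal panel like any other tactic output.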
Can LLMs generate mathematical proofs that can be rigorously checked? We release LeanDojo: an open-source playground consisting of toolkits, benchmarks, and models for LLMs to prove formal theorems in the Lean proof assistant.
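The key property of this setting is that correctness is machine-checkable: any proof a model emits must pass Lean's kernel. A minimal example of the kind of formal theorem involved:

```lean
-- A simple Lean 4 theorem. Lean's kernel checks the proof term,
-- so a model-generated proof that type-checks is guaranteed correct.
theorem my_add_zero (n : Nat) : n + 0 = n := rfl
```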
Data drives progress in computer vision. We introduce Infinigen: a generator of unlimited high-quality 3D data. 100% procedural, no external assets, no AI. Free and open source.
We propose MetaQNL, a symbolic "Quasi-Natural" language that can express both formal logic and natural language. Instead of manually constructing MetaQNL rules, we propose MetaInduce: an algorithm for learning rules from data.
We introduce NLProofS (Natural Language Proof Search) for multi-step logical reasoning in natural language. Given a hypothesis and a set of supporting facts, it generates a proof tree showing how the hypothesis is derived from the supporting facts.
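To make the output format concrete, here is a minimal sketch of a proof tree deriving a hypothesis from supporting facts. This is an illustration of the data structure, not the NLProofS implementation, and the statements are made up:

```python
from dataclasses import dataclass, field

@dataclass
class ProofNode:
    """One step in a natural-language proof tree."""
    statement: str
    premises: list["ProofNode"] = field(default_factory=list)

    def render(self, indent: int = 0) -> str:
        """Pretty-print the tree, premises indented under their conclusion."""
        lines = ["  " * indent + self.statement]
        for p in self.premises:
            lines.append(p.render(indent + 1))
        return "\n".join(lines)

# Two supporting facts derive the hypothesis.
fact1 = ProofNode("a squirrel is a mammal")
fact2 = ProofNode("all mammals are animals")
hypothesis = ProofNode("a squirrel is an animal", premises=[fact1, fact2])
print(hypothesis.render())
```

In NLProofS, such trees are not just generated free-form: a verifier scores each derivation step, and the search keeps only well-supported steps.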
We annotate human faces in ImageNet and obfuscate them for privacy protection. We show that face obfuscation does not hurt image classification and transfer learning.
We propose Minimally Contrastive Data Collection, a novel crowdsourcing method for reducing dataset bias, and use it to construct Rel3D, the first large-scale, human-annotated dataset for grounding spatial relations in 3D.
We reveal and mitigate fairness issues of ImageNet, filtering its concept vocabulary and balancing its representation of various demographic groups in images.
We introduce CoqGym, one of the first and largest datasets for theorem proving in proof assistants, and ASTactic, a deep learning prover that generates tactics as programs.
We propose Adversarial Crowdsourcing to reduce dataset bias and use it to construct SpatialSense, a challenging dataset for recognizing spatial relations in images.