Tom Silver

Assistant Professor, Princeton University

Contact: tsilver@princeton.edu

I am an assistant professor at Princeton in the Electrical and Computer Engineering department and a core faculty member in robotics. I am also affiliated with the Center for Statistics and Machine Learning. I completed my PhD in May 2024 at MIT EECS, where I was advised by Leslie Kaelbling and Josh Tenenbaum. I was a postdoc with Tapo Bhattacharjee at Cornell in 2024-2025. I received my B.A. from Harvard with highest honors in computer science and mathematics in 2016.

I direct the Princeton Robot Planning and Learning (PRPL) lab. Our mission is to develop generalist robots that learn and plan to help people. Most of our work is at the intersection of automated planning and machine learning: learning to plan and planning to learn while making efficient use of limited data and time. We often use techniques from task and motion planning, program synthesis, foundation models, reinforcement learning, and neuro-symbolic ML. But we are driven by problems, not methods, and we are especially motivated to solve general problems in robot caregiving.

news

Jun 24, 2025 FEAST won the Best Paper Award at RSS 2025!
May 23, 2025 New preprint: “Coloring Between the Lines: Personalization in the Null Space of Planning Constraints” (arxiv, website).
May 20, 2025 Heading to ICRA 2025 to help organize the PhyRC Challenge!
Apr 22, 2025 Two papers at RSS 2025: Bilevel Learning for Bilevel Planning (led by Bowen Li) and FEAST: A Flexible Mealtime Assistance System Towards In-the-Wild Personalization (led by Rajat Kumar Jenamani). See you in LA!
Mar 01, 2025 Two papers at HRI 2025: GRACE: Generalizing Robot-Assisted Caregiving with User Functionality Embeddings (led by Ziang Liu) and CART-MPC: Coordinating Assistive Devices for Robot-Assisted Transferring with Multi-Agent Model Predictive Control (led by Ruolin Ye).

research areas

Learning abstractions for robot planning

Abstractions allow robots to first focus on the high-level aspects of a task before getting bogged down in details. We would like a robot to automatically learn abstractions—state abstractions (predicates) and action abstractions (skills)—that are specialized for planning in its domain. We are especially interested in abstractions for task and motion planning. (Image credit: Nishanth Kumar)
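For a concrete (and purely illustrative) picture of what these abstractions look like, here is a minimal Python sketch; the class, predicate, and skill names are hypothetical, not our actual codebase:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch only: a state abstraction maps continuous states to
# symbolic facts via predicate classifiers, and an action abstraction
# summarizes a skill's effects on those facts so a symbolic planner can
# reason over them.

@dataclass(frozen=True)
class Predicate:
    name: str
    holds: Callable[[dict], bool]  # classifier over the continuous state

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset

def abstract(state: dict, predicates: list) -> frozenset:
    """Return the set of predicate names that hold in a continuous state."""
    return frozenset(p.name for p in predicates if p.holds(state))

# Example: a predicate grounded in continuous gripper features, and a Pick
# skill described at the symbolic level.
Holding = Predicate("Holding", lambda s: s["gripper_closed"] and s["dist_to_obj"] < 0.01)
Pick = Operator("Pick", frozenset({"HandEmpty"}), frozenset({"Holding"}), frozenset({"HandEmpty"}))
```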

Program synthesis for planning

We want robots to be like self-supervised software engineers, writing their own code and growing libraries that can be used to solve increasingly difficult decision-making problems. We use LLMs, Bayesian program learning, inductive logic programming, SAT solvers, and heuristic search to synthesize programs.
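As a toy illustration of the enumerative end of this toolbox (the DSL and names below are hypothetical, not code from our papers), even a breadth-first search over compositions of primitives can recover a program from a few input-output examples:

```python
from itertools import product

# Toy enumerative synthesis sketch: breadth-first search over compositions of
# DSL primitives for a program consistent with a few input-output examples.

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of primitive names to an input."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def synthesize(examples, max_depth=3):
    """Return a shortest composition of primitives that fits all examples."""
    programs = [[]]
    for _ in range(max_depth):
        programs = [prog + [name] for prog, name in product(programs, PRIMITIVES)]
        for prog in programs:
            if all(run(prog, i) == o for i, o in examples):
                return prog
    return None

# square(inc(x)) maps 2 -> 9 and 3 -> 16.
print(synthesize([(2, 9), (3, 16)]))  # -> ['inc', 'square']
```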

Learning to accelerate planning

Even with good abstractions, online planning can be slow, especially in high-dimensional environments with many objects. Robots should learn to plan better and faster over time. We can automatically accelerate planning by learning object-centric task abstractions, learning to self-impose constraints, or learning heuristics.
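To show where a learned heuristic plugs in, here is a minimal, illustrative sketch of greedy best-first search; in our work the heuristic would be learned from past planning experience rather than hand-coded as in this toy example:

```python
import heapq

# Illustrative sketch (not our planner): greedy best-first search where the
# heuristic is a plug-in function; in practice it could be a learned model
# (e.g., a neural network trained on solutions to past planning problems).

def best_first_search(start, is_goal, successors, heuristic):
    """Return a list of actions from start to a goal state, or None."""
    frontier = [(heuristic(start), start, [])]
    visited = {start}
    while frontier:
        _, state, plan = heapq.heappop(frontier)
        if is_goal(state):
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                heapq.heappush(
                    frontier, (heuristic(next_state), next_state, plan + [action])
                )
    return None

# Toy usage: reach position 5 on a number line; the "learned" heuristic here
# is just distance to goal.
print(best_first_search(
    start=0,
    is_goal=lambda s: s == 5,
    successors=lambda s: [("+1", s + 1), ("-1", s - 1)],
    heuristic=lambda s: abs(5 - s),
))  # -> ['+1', '+1', '+1', '+1', '+1']
```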

Planning to learn

Robots should plan to practice to get better at planning. They should rapidly learn to specialize to the objects, goals, preferences, and constraints that are unique to their deployment. We can plan to learn samplers, predicates, and operators for bilevel planning. Our ultimate goal is to create a virtuous cycle of learning and planning.
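Here is a rough, illustrative sketch of the refinement step in bilevel planning, with hypothetical skills, samplers, and feasibility checks standing in for learned ones; a refinement failure is exactly the kind of signal that tells the robot what to practice next:

```python
import random

# Bilevel-planning sketch: a high-level planner proposes a sequence of
# symbolic skills; learned samplers then propose continuous parameters to
# refine each step, and refinement failures signal where the robot should
# replan or practice.

def refine(skill_plan, samplers, is_feasible, max_tries=10):
    """Try to turn a symbolic skill plan into (skill, parameters) pairs."""
    refined = []
    for skill in skill_plan:
        for _ in range(max_tries):
            params = samplers[skill]()        # learned sampler proposes parameters
            if is_feasible(skill, params):    # e.g., a motion plan exists
                refined.append((skill, params))
                break
        else:
            return None  # could not refine this step: replan, or practice it
    return refined

# Toy usage with made-up skills: grasp offsets in [0, 1], feasible above 0.2.
samplers = {"Pick": lambda: random.uniform(0, 1), "Place": lambda: random.uniform(0, 1)}
print(refine(["Pick", "Place"], samplers, is_feasible=lambda skill, p: p > 0.2))
```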

Assistive robotics

Our ultimate goal is to use robots to help people, and we are especially focused on helping people with activities of daily living (ADLs) such as feeding, dressing, and transferring from bed to wheelchair. We study general problems that span multiple ADLs, e.g., continual personalization of a task and motion planner.

code

I am a big fan of open-source code and open science. I typically develop research projects out in the open, not in private repos. You can find the code for every research project I have led on my GitHub or linked from the respective papers. Here are some quick links: