On the Role of Large Language Models in Planning

Large Language Models · Sequential Decision Making · Automated Planning

AAAI 2024 Tutorial

Wednesday, February 21 Afternoon (2pm-6pm).

Overview


Large Language Models (LLMs, or n-gram models on steroids), originally trained to generate text by repeatedly predicting the next word given a window of previous words, have captured the attention of the AI community and the world at large. Part of the reason is their ability to produce meaningful completions for prompts relating to almost any area of human intellectual endeavor. This sheer versatility has also led to claims that these predictive text-completion systems may be capable of abstract reasoning and planning. In this tutorial we take a critical look at the ability of LLMs to help in planning tasks, either in autonomous modes or in assistive modes. We are particularly interested in characterizing these abilities, if any, in the context of problems and frameworks widely studied in the AI planning community.


The tutorial will point out the fundamental limitations of LLMs in generating plans that normally require resolving subgoal interactions through combinatorial search, and will also show constructive uses of LLMs as complementary technologies to the sound planners developed in the AI planning community. In addition to presenting our own work in this area, we provide a critical survey of many related efforts, including those by researchers outside the planning community.

Topics covered

The preliminary list of topics to be covered in this tutorial includes:
  1. Background on LLMs, and patterns of LLM use, including prompting techniques
  2. Differentiating the use of transformer architectures vs. pre-trained LLMs in planning
    • Word2vec to plan, decision transformers, our work on fine-tuning GPT-2, and learning verifiers
  3. LLMs & Planning - Autonomous mode
    • Prompting in natural language or directly in PDDL; effect of fine-tuning; chain-of-thought prompting, etc.
    • Limitations of the self-critiquing and verification abilities of LLMs in reasoning/planning.
  4. LLMs as heuristics/idea generators for planning
    • Connections to Case-Based and Model-Lite planning
  5. Search by back-prompting LLMs
    • Automated vs. human-driven back-prompts (and the Clever Hans problem with the latter); a minimal sketch of an automated back-prompting loop is given after this list
  6. LLMs as model acquisition techniques
  7. LLMs as vehicles to support general types of planning
    • Incompletely specified (highly disjunctive) goals; HTN planning; “generalized planning”
    • Use of LLMs in RL settings (to get rewards, preferences)
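
As a concrete illustration of topic 5, below is a minimal sketch, in Python, of an automated back-prompting loop in which an LLM proposes a candidate plan and an external, sound verifier (e.g., VAL) critiques it. The names query_llm and validate_plan are hypothetical placeholders for an LLM API call and a verifier wrapper; the sketch only conveys the generate-test-critique structure, not any particular implementation covered in the tutorial.

    # Sketch of automated back-prompting: the LLM proposes, an external verifier critiques.
    # query_llm and validate_plan are hypothetical placeholders (an LLM completion API
    # and a wrapper around a sound plan verifier such as VAL, respectively).

    def query_llm(prompt: str) -> str:
        """Placeholder: return an LLM completion for the given prompt."""
        raise NotImplementedError

    def validate_plan(domain_pddl: str, problem_pddl: str, plan: str) -> tuple[bool, str]:
        """Placeholder: run an external verifier; return (is_valid, error_message)."""
        raise NotImplementedError

    def backprompt_plan(domain_pddl: str, problem_pddl: str, max_rounds: int = 5):
        """Ask the LLM for a plan and back-prompt it with verifier feedback."""
        prompt = ("Produce a plan, one action per line, for the following PDDL problem.\n"
                  f"Domain:\n{domain_pddl}\nProblem:\n{problem_pddl}\n")
        for _ in range(max_rounds):
            candidate = query_llm(prompt)
            valid, error = validate_plan(domain_pddl, problem_pddl, candidate)
            if valid:
                return candidate  # the verifier accepts the plan
            # Append the verifier's critique and re-prompt the LLM.
            prompt += (f"\nYour previous plan was invalid:\n{candidate}\n"
                       f"The verifier reported: {error}\nPlease produce a corrected plan.\n")
        return None  # no valid plan found within the back-prompting budget

The loop terminates either with a verifier-certified plan or with failure after a fixed budget; any soundness guarantee comes from the external verifier, not from the LLM's self-critique.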

Materials

Perspective Paper
Slides
Videos

Organizers

Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He served as the president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, the chair of AAAS Section T (Information, Communication and Computation), and a founding board member of the Partnership on AI. Kambhampati's research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.


Karthik Valmeekam is a third-year Ph.D. student at Arizona State University working in the Yochan Lab under the guidance of Prof. Subbarao Kambhampati. His research primarily focuses on Large Language Models (LLMs) and reasoning, with a special emphasis on exploring the planning abilities of LLMs. This includes understanding the various roles that LLMs can play in planning and reasoning about actions and change. He has also made contributions in areas such as Human-Aware AI Planning and Preference-Based Reinforcement Learning. His research has been recognized at major AI conferences such as NeurIPS, ICLR, and ICAPS.


Lin Guan is a fifth-year Ph.D. student at Arizona State University under the supervision of Prof. Subbarao Kambhampati. His research primarily focuses on building intelligent decision-making agents through methods such as reinforcement learning from human feedback (RLHF) and plan generation with large language models (i.e., LLM-based AI agents). His research has been recognized at top-tier AI conferences such as NeurIPS, ICLR, and ICML.


Citation

This version of the tutorial may be cited as:

S. Kambhampati, K. Valmeekam & L. Guan. (2024, February). On the Role of Large Language Models in Planning. Tutorial presented at The 38th Annual AAAI Conference on Artificial Intelligence, Vancouver. https://aaai.org/aaai-conference/aaai-24-tutorial-and-lab-list/#th20.

            @misc{kambhampati2023role,
                author = {Kambhampati, Subbarao and Valmeekam, Karthik and Guan, Lin},
                title = {On the Role of Large Language Models in Planning},
                year = {2024},
                month = {February},
                note = {Tutorial presented at The 38th Annual AAAI Conference on Artificial Intelligence, Vancouver},
                url = {https://aaai.org/aaai-conference/aaai-24-tutorial-and-lab-list/#th20}
            }