Large language model co-pilot for transparent and trusted life cycle assessment comparisons (Proposals Track)

Nathan Preuss (Cornell University); Fengqi You (Cornell University)

Keywords: Natural Language Processing; Agriculture & Food; Public Policy; Supply Chains; Data Mining; Generative Modeling

Abstract

Intercomparing life cycle assessments (LCAs), a common type of sustainability and climate model, is difficult due to differences in fundamental assumptions, especially those made in the goal and scope definition stage. This complicates decision-making and the selection of climate-smart policies, as it becomes difficult to compare optimal products and processes across studies. To aid policymakers and LCA practitioners alike, we plan to leverage large language models (LLMs) to build a database of documented assumptions for LCAs across the agricultural sector, with a case study on livestock management. Articles for this database are identified through a systematic literature search, then processed to extract relevant assumptions about each LCA's goal and scope definition, which are inserted into a vector database. We then leverage this database to develop an AI co-pilot, augmenting LLMs with retrieval-augmented generation (RAG), for use by stakeholders and LCA practitioners. This co-pilot will deliver two major benefits: 1) enhancing decision-making by facilitating comparisons among LCAs, enabling policymakers to adopt data-driven climate policies, and 2) encouraging LCA practitioners to adopt common assumptions. Ultimately, we hope to create a foundation model for LCA tasks that can plug into existing open-source LCA software and tools.
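
The abstract describes the pipeline only at a high level. The Python sketch below illustrates one way the three stages could be wired together: LLM-based extraction of goal-and-scope assumptions, embedding into a vector index, and retrieval-augmented question answering. It is a minimal sketch, not the authors' implementation; the prompt text, the call_llm placeholder, the AssumptionStore class, and the choice of the all-MiniLM-L6-v2 sentence encoder are all illustrative assumptions, and the in-memory index stands in for whatever vector database the project ultimately uses.

```python
"""Hypothetical sketch of the proposed LCA co-pilot pipeline:
1) extract goal-and-scope assumptions from an article with an LLM,
2) embed and store them in a (toy, in-memory) vector index,
3) answer questions with retrieval-augmented generation (RAG)."""
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative prompt; the paper does not specify the extraction prompt.
EXTRACTION_PROMPT = (
    "List the goal and scope assumptions of the LCA described in the "
    "following article excerpt (e.g., functional unit, system boundary, "
    "allocation method, impact categories):\n\n{article}"
)

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works


def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted or local LLM."""
    raise NotImplementedError


class AssumptionStore:
    """Toy in-memory vector index over extracted LCA assumptions."""

    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add_article(self, article_text: str) -> None:
        # Stage 1-2: extract assumptions with the LLM, then embed and store.
        assumptions = call_llm(EXTRACTION_PROMPT.format(article=article_text))
        self.texts.append(assumptions)
        self.vecs.append(embedder.encode(assumptions))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Rank stored assumption records by cosine similarity to the query.
        q = embedder.encode(query)
        vecs = np.array(self.vecs)
        sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
        return [self.texts[i] for i in np.argsort(sims)[::-1][:k]]


def copilot_answer(store: AssumptionStore, question: str) -> str:
    """Stage 3 (RAG): ground the LLM's answer in retrieved assumptions."""
    context = "\n---\n".join(store.retrieve(question))
    return call_llm(
        f"Using only these documented LCA assumptions:\n{context}\n\n"
        f"Answer the practitioner's question: {question}"
    )
```

Grounding each answer in retrieved, documented assumptions, rather than in the LLM's parametric knowledge alone, is what would make cross-study comparisons transparent and auditable in a design like this.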