Sustainability


Workshop Papers

Venue: NeurIPS 2023
Title: How to Recycle: General Vision-Language Model without Task Tuning for Predicting Object Recyclability (Papers Track)

Abstract: Waste segregation and recycling play a crucial role in fostering environmental sustainability. However, discerning whether a material is recyclable poses a formidable challenge, primarily because recycling guidelines are inadequate for the diverse spectrum of objects and their varying conditions. We investigated the role of vision-language models in addressing this challenge. We curated a dataset of >1,000 images across 11 disposal categories for optimal discarding and assessed the applicability of general vision-language models for recyclability classification. Our results show that the Contrastive Language-Image Pre-training (CLIP) model, which is pretrained to understand the relationship between images and text, demonstrated remarkable performance on the zero-shot recyclability classification task, with an accuracy of 89%. Our results underscore the potential of general vision-language models in addressing real-world challenges, such as automated waste sorting, by harnessing the inherent associations between visual and textual information.

Authors: Eliot Park (Harvard Medical School); Eddy Pan (Harvard Medical School); Shreya Johri (Harvard Medical School); Pranav Rajpurkar (Harvard Medical School)
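
The paper applies an off-the-shelf CLIP model in a zero-shot setting: each image is compared against text prompts for the disposal categories, and the best-matching prompt gives the predicted class. The snippet below is a minimal sketch of that general approach, assuming the Hugging Face transformers implementation of CLIP; the category names and prompt wording are illustrative placeholders, not the paper's actual 11 disposal categories or prompts.

```python
# Sketch of zero-shot disposal-category classification with a pretrained CLIP model.
# Assumes: pip install torch pillow transformers
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative categories only; the paper defines 11 disposal categories not listed here.
CATEGORIES = [
    "recyclable plastic", "recyclable paper", "glass", "metal",
    "compostable food waste", "electronic waste", "general landfill waste",
]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def classify(image_path: str) -> str:
    """Return the category whose text prompt is most similar to the image."""
    image = Image.open(image_path).convert("RGB")
    prompts = [f"a photo of {c}" for c in CATEGORIES]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image holds image-to-text similarity scores for each prompt.
    probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
    return CATEGORIES[int(probs.argmax())]

if __name__ == "__main__":
    print(classify("example_item.jpg"))  # hypothetical image path
```

No task-specific fine-tuning is involved: swapping in different categories only requires changing the text prompts, which is what makes the zero-shot setup attractive for waste-sorting guidelines that vary by locality.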