Talk: ‘On Crises: Crisis Informatics and Large Language Model Alignment’ by A. KhudaBukhsh, RIT

Talk: 'On Crises: Crisis Informatics and Large Language Model Alignment' by A. KhudaBukhsh, RIT

Date & Time: Friday, 21/6 at 11 am to 12 noon

Location: To be announced

Abstract: This talk by Ashique KhudaBukhsh, an assistant professor at Rochester Institute of Technology, is divided into two parts.

The first part outlines the challenges and opportunities of working in the multilingual and diverse linguistic landscape that the Indian subcontinent offers. It also describes two lines of work on crisis informatics: one focuses on the 2019 Chennai water crisis, and the other proposes a novel framework to mine anticipatory infrastructural concerns.

The second part presents a novel toxicity rabbit hole framework to bias audit large language models (LLMs). Starting with a stereotype, the framework instructs the LLM to generate more toxic content than the stereotype. Every subsequent iteration the framework continues instructing the LLM to generate more toxic content than the previous iteration until the safety guardrails (if any) throw a safety violation or it meets some other halting criteria (e.g., identical generation or rabbit hole depth threshold). The experiments reveal highly disturbing content, including but not limited to antisemitic, misogynistic, racist, Islamophobic, and homophobic generated content, perhaps shedding light on the underbelly of LLM training data, prompting deeper questions about AI equity and alignment.