2:00 – 2:10 pm | Opening Remarks |
2:10 – 2:40 pm | Keynote Talk 1: Toward Ethical AI for the Neglected Tails (Dr. James Caverlee, Texas A&M University). Abstract: In this talk, I will share some of our recent work on fairness and bias, with an eye toward the "neglected tails" in applications such as recommendation, LLMs, vision-language models, and speech systems. These systems often demonstrate strong performance on popular concepts (or items or users), but in many cases there is a gap in how they treat rare (or tail) concepts. Can we bridge this gap? These and similar questions are motivated by the Rawlsian max-min principle: design systems that maximize the welfare of the least well-off (the "tails" of the distribution). |
2:40 – 3:40 pm | Accepted Paper Talks 1 |
3:40 – 4:10 pm | Keynote Talk 2: Trustworthy LLMs: Detection and Red-Teaming (Manish Nagireddy, IBM Research). Abstract: Trustworthy AI is paramount to the responsible use of AI systems. As large language models (LLMs) grow in popularity, their generative nature amplifies existing trust-related harms (such as fairness, robustness, and transparency) and reveals new dangers (such as hallucinations and toxicity). I will first walk through a catalog of harms and delve more deeply into how such harms can be automatically measured with detectors. Notably, these detectors can be applied throughout the LLM lifecycle, from filters on pre-training data, to reward models during alignment, to guardrails after deployment. Then, I will go through an example of developing a benchmark to capture a unique harm discovered via interactive probing. Next, I will combine both ideas to describe the development of a nuanced detector. Finally, I will close with thoughts on the need for dynamic and participatory evaluation practices (such as red-teaming) and next steps toward more trustworthy systems. |
4:10 – 4:25 pm | Coffee Break |
4:25 – 4:55 pm | Keynote Talk 3 (Dr. Tyler Derr, Vanderbilt University) |
4:55 – 5:55 pm | Accepted Paper Talks 2 |
5:55 – 6:00 pm | Closing |