Trustworthy AI Lab at Korea University
We conduct foundational research to bring AI into society
News
Nov 2024: Our paper "Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness" has been accepted to TMLR.
Jul 2024: Our paper "Adversarial Robustification via Text-to-Image Diffusion Models" has been accepted to ECCV 2024 as an oral presentation.
Jun 2024: Our paper "DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing" has been accepted to the ICML 2024 Next Generation of AI Safety Workshop.
Feb 2024: Our lab's website has just been launched!
Who we are
The Trustworthy AI Lab at Korea University (or TAIL for short) is a research group led by Prof. Jongheon Jeong in the Department of Artificial Intelligence. Our mission is to make recent advances in AI not only powerful but also trustworthy and reliable, so that they can benefit society when broadly deployed.
Open positions: We are looking for self-motivated, curiosity-driven graduate students and undergraduate interns to join our lab. If you are interested, please reach out via email with your CV and transcript.
What we do
We conduct research on learning representations that are both useful and societally acceptable. We focus on developing ideas and algorithms that are (a) generalizable to out-of-distribution scenarios and (b) scalable within modern AI-based systems, so that they can contribute to building safer AI as a complex system. Important research directions we address include, but are not limited to, the following:
AI Safety: "Are we truly prepared to expose AI to the public, even to its potential hazards?"
Robustness: adversarial machine learning, out-of-distribution generalization, test-time adaptation, etc.
Monitoring: novelty/anomaly detection, uncertainty estimation, interpretable AI, etc.
Alignment: preference optimization, believable agents, reward modeling, etc.
Responsibility: copyright protection, deepfake prevention, fairness, privacy, etc.
Foundation Models: "What properties emerge at scale? Is scale either sufficient or necessary for trustworthiness?"
Generative AI: diffusion and flow-based models, language models, high-dimensional vision, etc.
Scalable Representations: multimodal learning, self-supervised learning, robust fine-tuning, etc.
Beyond these topics, we are broadly interested in advancing foundational areas of machine learning and deep learning at large.