Our group focuses on Trustworthy and Explainable Artificial Intelligence (AI). That is, we are interested in understanding the weaknesses of AI systems and making them robust and accurate in rare, unseen scenarios. We believe that one promising path to better accuracy in rare or hard cases is to encourage AIs to explain themselves (the exact format of the explanation depends on the application). Beyond improving accuracy, the ability to explain also helps AIs communicate with users, enabling people to make better decisions and collaborate more effectively with machines. We’re looking for talented Ph.D. students and postdocs!
We aim to advance the state of explainable AI (XAI) by evaluating and explaining neural networks, from their inner workings to their outputs, using a diverse array of research techniques, including:
- Visualizing neural activations (2015) and counterfactual generations (2021)
- Feature visualization, a.k.a. activation maximization (2016I, 2016N, 2020); see the minimal sketch after this list
- Feature importance/attribution (2020C, 2020A, 2022AA, 2022A)
- Quantifying CNN preferences/biases (2020) and failures in natural (2019, 2021, 2024) and adversarial settings (2015, 2018)
- Inverting CNNs for explanation and synthesis (2017, 2022)
- Building image classifiers with an explainable, editable visual-correspondence bottleneck (2022C, 2022N, 2024)
- Building image similarity networks that have explainable attention (2024)
- Testing the effect of explanations on human decision-making accuracy (2021, 2022, 2024)
- Leveraging image similarity models and training-set examples to improve image classification and explainability (2024)
- Red teaming: evaluating CNN and Transformer image classifiers (2015, 2019, 2023), NLP text classifiers (2021), and vision-language models (2024C, 2024A)
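As a small taste of the feature-visualization line of work above, here is a minimal activation-maximization sketch in PyTorch. It is illustrative only (not our released code): the choice of model, target class, learning rate, step count, and regularizer are assumptions for the example.

```python
import torch
import torchvision.models as models

# Load a pretrained ImageNet classifier (an illustrative choice of model).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 130  # hypothetical target unit ("flamingo" in ImageNet-1k)

# Start from small random noise and ascend the logit of the target class.
x = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    logit = model(x)[0, target_class]
    # Maximize the logit (i.e., minimize its negative); a weak L2 penalty
    # keeps the optimized image from blowing up.
    loss = -logit + 1e-4 * x.norm()
    loss.backward()
    optimizer.step()

# x now roughly shows what the unit prefers; practical pipelines add stronger
# priors (jitter, blurring, or generator networks) to obtain recognizable images.
```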
Sponsors
Our lab is grateful to have received funding from:
- National Science Foundation CAREER (#2145767)
- NaphCare Charitable Foundation (gift)
- National Science Foundation CRII (#1850117)
- Adobe Research, Amazon, Google, Nvidia
The following prestigious fellowships have supported our students:
- AU Presidential Graduate Research Fellowships (PGRF): Giang Nguyen (2021), Huy Hung Nguyen (2023)
- Woltosz Fellowship: Michael Alcorn (2018), Hai Phan (2021)
- AL EPSCoR Graduate Research Scholar: Hai Phan (2022)
- AU Charles Gavin Research Fellowship: Pooyan Rahmanzadehgervi (2022)
Our industry partners have recruited our students for summer internships:
- Adobe Research: Michael Alcorn (2019), Thang Pham (2021, 2023, 2024), Hai Phan (2022), Mohammad Taesiri (2024)
- Bosch: Qi Li (2020), Renan Gomez (2021)
- Noteworthy AI: Peijie Chen (2023, 2024)
Members
Name | Status | Before Auburn | Internships during Ph.D. |
---|---|---|---|
Thang Pham | Ph.D. student (2018 – present) | ML engineer @ AIST (Tokyo); BS @ VNUHCM (Vietnam) | 2021 Adobe; 2023 Adobe; 2024 Adobe |
Giang Nguyen | AU PGRF Fellow Ph.D. student (2021 – present) | BS @ HUST (Hanoi); MS @ KAIST (Korea) | 2024 JP Morgan |
Pooyan Rahmanzadehgervi | AU Charles Gavin Research Fellow Ph.D. student (2022 – present) | MS @ Ferdowsi U of Mashhad (Iran) | |
Tin Nguyen | Ph.D. student (2022 – present) | BS @ VNUHCM (Vietnam); MS @ Sejong U (Korea) | |
Hung Huy Nguyen | AU PGRF Fellow Ph.D. student (2023 – present) | BS @ HUST (Vietnam); MS @ Seoul National University (Korea) | |
Logan Bolton | Undergraduate student (2024 – present) | BS @ Auburn U | |
Alumni
Name | Status | Before Auburn | Internships during Ph.D. | After Auburn | Now (2024) |
---|---|---|---|---|---|
Hai Phan | Woltosz Fellow, EPSCoR GRSF Fellow Ph.D. student (2021 – 2024) | MS @ Carnegie Mellon U (Pittsburgh); BS @ VNUHCM | 2022 Adobe; 2022 Meta | Research Engineer at Blue Marble Geographics | Research Engineer at Blue Marble Geographics |
Peijie Chen | Ph.D. student (2019 – 2024) | BS @ Shenzhen U | 2022 Noteworthy AI; 2024 Noteworthy AI | Senior ML Engineer at Noteworthy AI | Senior ML Engineer at Noteworthy AI |
Mohammad Reza Taesiri | Remote collaborator (2020 – 2024) | MS @ Sharif U | 2023 Ubisoft; 2024 Adobe | Ph.D. student at U Alberta | Ph.D. student at U Alberta |
Renan Alfredo Rojas Gomez | Ph.D. student (2021 – 2023), co-advised with Prof. Minh Do at UIUC | | 2021 Bosch; 2024 DeepMind | Ph.D. student at UIUC | |
Michael Alcorn | 2018 Woltosz Fellow Ph.D. student (2018 – 2021) | ML engineer @ Red Hat; MS @ UT Dallas; BS @ Auburn | 2019 Adobe; 2020 Cleveland Indians | Postdoc at USDA | Senior ML Engineer at Bear Flag Robotics |
Qi Li | M.S. student (2018 – 2021) | BS @ Southeast U (Nanjing) | 2020 Bosch | Ph.D. student | |
Chirag Agarwal | Ph.D. student (2018 – 2020) | MS @ U Illinois at Chicago | 2017 Kitware; 2018 Tempus Labs; 2018 Bosch | Postdoc at Harvard; Research Scientist at Adobe Research | Research Fellow at Harvard |
Naman Bansal | M.S. student (2018 – 2020) | BS @ IIT-G | 2022 Facebook/Meta | Ph.D. student at Auburn | |
Zhitao Gong | Ph.D. student (2017 – 2019), co-advised with Prof. Wei-shinn Ku | BS @ Nanjing U | 2017 Google; 2018 Facebook | Research Engineer at DeepMind | Research Engineer at DeepMind |
Nader Akoury | Remote collaborator (2017 – 2018) | BS @ Duke; Software Engineer at Yelp | | Ph.D. student at U Mass Amherst | |
Vishaal Kabilan | Research Intern (2017 – 2018) | MS @ Worcester Polytechnic | | Data scientist at IBM Bangalore | |
Brandon Morris | B.S. student (2017 – 2018) | | | DoD SMART Ph.D. student at ASU | |