I'm a fifth-year CS Ph.D. student in the School of Computing at the University of Utah, advised by Jeff M. Phillips. My research aims to understand the geometry of distributed embeddings, contextualized embeddings, and feature-space representations, with the goal of improving interpretability and downstream task utility in cross-lingual/multilingual Natural Language Processing and in ethical, responsible AI. I am broadly interested in Natural Language Processing, Speech Representation Learning, and Knowledge Representation Learning (especially graph-based methods), particularly in monolingual, cross-lingual/multilingual, and multi-modal settings.
I am currently working on two exciting projects. 1) The first studies models that jointly predict task labels and generate free-text explanations for their predictions, known as self-rationalization models. Such models are of great interest in modern Explainable AI because they allow a more intuitive interaction with NLP systems. Given the free-text explanations produced by a self-rationalization model (say, a GPT-3 model), our goal is to assign an uncertainty score to each generated explanation, quantifying how much confidence the model places in its own rationale (a toy confidence-scoring sketch appears after these project summaries).
2) The second builds on a novel debiasing technique we proposed that makes identified concepts uncorrelated and aligned with the coordinate axes, resulting in improved interpretability of vectorized representations. Our goal now is to extend this idea of aligning uncorrelated concepts with the coordinate axes to the feature representations learned in the penultimate layer of deep neural networks (DNNs), in order to impose orthogonality in the feature space. The principal idea is to augment the softmax cross-entropy (CE) loss with an orthogonality term that simultaneously enforces better inter-class separation and tighter intra-class clustering in the feature space (a minimal sketch of one such regularized loss appears below).
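As a toy illustration of the first project's goal (not our actual method), the sketch below scores a generated explanation by the mean log-probability a causal language model assigns to its own explanation tokens, one naive proxy for the model's confidence in its rationale. The checkpoint name, the prompt format, and the `explanation_confidence` helper are hypothetical placeholders.

```python
# Toy sketch (not the project's actual method): score a generated free-text
# explanation by the mean token log-probability the model assigns to it,
# as one naive proxy for the model's confidence in its own rationale.
# Assumes a Hugging Face causal LM; "gpt2" is only a stand-in checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def explanation_confidence(prompt: str, explanation: str) -> float:
    """Mean log-probability of the explanation tokens given the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + explanation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits          # (1, seq_len, vocab)
    # Log-probabilities at each position for predicting the *next* token.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Average only over the explanation tokens (skip the prompt portion).
    n_prompt = prompt_ids.shape[1]
    return token_lp[:, n_prompt - 1:].mean().item()

score = explanation_confidence(
    "Premise: A man plays guitar. Hypothesis: A person makes music. "
    "Label: entailment. Explanation: ",
    "Playing guitar is a way of making music.",
)
print(f"mean token log-prob: {score:.3f}")
```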
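For the second project, here is a minimal sketch, under simplifying assumptions, of how a softmax CE loss might be augmented with an orthogonality penalty on per-class mean features within a batch. It illustrates the general idea of pushing class directions toward the coordinate-axis-like, mutually orthogonal regime; it is not the proposed loss itself, and the function name `ce_with_orthogonality` and the weight `lam` are hypothetical.

```python
# Minimal sketch (an illustration, not the actual proposed loss): softmax
# cross entropy plus a penalty that pushes per-class mean features in the
# penultimate layer toward mutual orthogonality, encouraging larger
# inter-class angles while CE handles the intra-class clustering.
import torch
import torch.nn.functional as F

def ce_with_orthogonality(features, logits, labels, num_classes, lam=0.1):
    """features: (B, D) penultimate-layer activations; logits: (B, C)."""
    ce = F.cross_entropy(logits, labels)

    # Per-class mean feature vectors for the classes present in the batch.
    means = []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            means.append(features[mask].mean(dim=0))
    M = F.normalize(torch.stack(means), dim=1)        # (C', D), unit rows

    # Off-diagonal Gram entries measure pairwise alignment of class means;
    # driving them to zero makes the class directions mutually orthogonal.
    gram = M @ M.t()
    off_diag = gram - torch.eye(gram.shape[0], device=gram.device)
    ortho_penalty = (off_diag ** 2).mean()

    return ce + lam * ortho_penalty

# Usage on a dummy batch:
B, D, C = 32, 64, 10
features = torch.randn(B, D)
logits = torch.randn(B, C)
labels = torch.randint(0, C, (B,))
loss = ce_with_orthogonality(features, logits, labels, num_classes=C)
print(loss.item())
```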
Before coming to the University of Utah, I obtained an MS in Mathematics from the University of Texas at El Paso, where I worked on Stochastic Optimal Control Theory with Applications to Financial Mathematics under the supervision of Michael Pokojovy. I also hold a BA in Economics and Mathematics from the University of Ghana, where I worked under Mrs. Lilian Frempomaa Kyei.
Most recent publications on Google Scholar.
Interpretable Debiasing of Vectorized Language Representations with Iterative Orthogonalization
Prince Osei Aboagye, Yan Zheng, Jack Shunn, Chin-Chia Michael Yeh, Junpeng Wang, Zhongfang Zhuang, Huiyuan Chen, Liang Wang, Wei Zhang, and Jeff Phillips
International Conference on Learning Representations (ICLR). 2023.
Normalization of Language Embeddings for Cross-Lingual Alignment
Prince Osei Aboagye, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei Zhang, Liang Wang, Hao Yang, and Jeff Phillips
International Conference on Learning Representations (ICLR). 2022.
Quantized Wasserstein Procrustes Alignment of Word Embedding Spaces
Prince Osei Aboagye, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Zhongfang Zhuang, Huiyuan Chen, Liang Wang, Wei Zhang, and Jeff Phillips
Conference of the Association for Machine Translation in the Americas (AMTA). 2022.
On Numerical Stochastic Optimal Control via Bellman's Dynamic Programming Principle
Prince Osei Aboagye
Master's Thesis: The University of Texas at El Paso (UTEP). 2018.
Full CV in PDF
Outside of research, I enjoy reading the Word of God, praying, and playing guitar. Here are a few of my favorite scriptures:
I can do all things through Christ who strengthens me. Philippians 4:13 NKJV
Now the Lord is the Spirit; and where the Spirit of the Lord is, there is liberty. 2 Corinthians 3:17 NKJV