Amazon and UIUC announce inaugural slate of funded research projects

Earlier this year, Amazon and the University of Illinois Urbana-Champaign (UIUC) announced the launch of the Amazon-Illinois Center on Artificial Intelligence for Interactive Conversational Experiences (AICE). The center, housed within the Grainger College of Engineering, supports UIUC researchers and students in their development of novel approaches to conversational-AI systems.

Today Amazon and UIUC are announcing the inaugural round of funded research projects along with the first cohort of annual fellowships. The research projects aim to further the development of intelligent conversational systems that demonstrate contextual understanding and emotional intelligence, allow for personalization, and are able to interpret nonverbal communication while being ethical and fair.

Fellowship recipients are conducting research in conversational AI, both to help advance the field and to support the next generation of researchers. They will be paired with Amazon scientists who will mentor them and give them a deeper understanding of problems in industry.

Below is a list of the awarded fellows and their research projects, followed by the faculty award recipients and their research projects.

Academic fellowship recipients

Steeve Huang, left, and Ming Zhong, right, are the inaugural academic fellows at the Amazon-Illinois Center on Artificial Intelligence for Interactive Conversational Experiences (AICE).

Steeve Huang is a third-year PhD student and a member of the BLENDER Lab, overseen by Amazon Scholar and computer science professor Heng Ji. Huang’s academic focus is on combating the proliferation of false information. His work in this field encompasses three key research directions: fact checking and fake-news detection, factual-error correction, and enhancing the faithfulness of text generation models. He has built a zero-shot factual-error correction framework that has demonstrated the ability to yield corrections that are more faithful and factual than those produced by traditional supervised methods. In 2022, Huang completed an internship with Amazon, where he collaborated with Yang Wang, associate professor of information sciences, and Kathleen McKeown, the Henry and Gertrude Rothschild Professor of Computer Science at Columbia University and an Amazon Scholar.

Ming Zhong is a third-year PhD student in the Data Mining Group and is advised by Jiawei Han, the Michael Aiken Chair Professor of computer science. Zhong’s research focuses on tailoring conversational AI to meet the diverse needs of individual users as these systems become increasingly embedded in everyday life. Specifically, he seeks to explore how to better understand conversational content in both human-to-human and human-to-computer interactions, as well as to develop new customized evaluation metrics for conversational AI. He also works on knowledge transfer across various models to boost their efficiency.

Research projects

Top row, left to right: Volodymyr Kindratenko, Yunzhu Li, and Gagandeep Singh; bottom row, left to right: Shenlong Wang, Romit Roy Choudhury, and Han Zhao.

Volodymyr Kindratenko, director of the Center for Artificial Intelligence Innovation and assistant director at the National Center for Supercomputing Applications, “From personalized education to scientific discovery with AI: Rapid deployment of AI domain experts”

“In this project, we aim to develop a knowledge-grounded conversational AI capable of rapidly and effectively acquiring new subject knowledge on a narrowly defined topic of interest in order to become an ‘expert’ on that topic. We propose a novel factual-consistency model that will evaluate whether an answer is backed by a corpus of verified information sources. We will introduce a novel training penalty beyond cross entropy, termed factuality loss, and a method of retrieval-augmented RL with AI feedback. Our framework will also attempt to supervise the reasoning process in addition to outcomes.”
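While the project’s methods are still to be developed, the general shape of a factuality penalty layered on top of cross entropy can be sketched briefly. The Python/PyTorch fragment below is purely illustrative: the consistency_score callable, the alpha weight, and the function name are assumptions made for this sketch, not details of the proposal.

import torch
import torch.nn.functional as F

def factuality_training_loss(logits, target_ids, answer_text, evidence_texts,
                             consistency_score, alpha=0.5):
    # Standard next-token cross-entropy against the reference answer.
    ce_loss = F.cross_entropy(logits, target_ids)
    # Hypothetical factuality penalty: the lower the consistency model's
    # estimate that the answer is supported by the verified corpus,
    # the higher the loss.
    support = consistency_score(answer_text, evidence_texts)  # float in [0, 1]
    fact_loss = -torch.log(torch.tensor(max(support, 1e-8)))
    return ce_loss + alpha * fact_loss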

Yunzhu Li, assistant professor of computer science, “Actionable conversational AI via language-grounded dynamic neural fields”

“In this proposal, our objective is to develop multimodal foundational models of the world, leveraging dynamic neural fields. If successful, the proposed framework enables three key applications: (1) the construction of a generative and dynamic digital twin of the real world as a data engine for multimodal data generation, (2) the facilitation of conversational AI in embodied environments, and (3) the empowerment of embodied agents to plan and execute real-world interaction tasks.”
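For intuition on the underlying representation: a neural field is a network that maps spatial coordinates to scene properties, and a dynamic variant adds time as an input. The toy PyTorch module below sketches only that core idea; the language grounding and multimodal conditioning central to the project are omitted, and all names here are illustrative assumptions rather than the proposal’s architecture.

import torch
import torch.nn as nn

class DynamicNeuralField(nn.Module):
    # Maps a 3-D query point plus a timestamp to a learned scene feature.
    def __init__(self, hidden=256, feature_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feature_dim),
        )

    def forward(self, xyz, t):
        # xyz: (N, 3) positions; t: (N, 1) timestamps.
        return self.net(torch.cat([xyz, t], dim=-1))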

Gagandeep Singh, assistant professor of computer science, “Efficient fairness certification of large language models”

“In this project, we will develop the first efficient approach to formally certify the fairness of large language models (LLMs) based on the design of novel fairness specifications and probabilistic certification methods. Certificates obtained with our method will provide greater confidence in LLM fairness than possible with current testing-based approaches.”
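For context on the baseline the project aims to surpass, today’s testing-based approaches amount to sampling plus statistical bounds. A minimal Python sketch, assuming a hypothetical model callable and a generator of prompt pairs that differ only in a protected attribute:

import math

def fairness_bound(model, paired_prompts, n_samples=1000, delta=0.05):
    # Count prompt pairs, identical except for a protected attribute,
    # that receive different responses.
    disparities = 0
    for _ in range(n_samples):
        prompt_a, prompt_b = next(paired_prompts)
        if model(prompt_a) != model(prompt_b):
            disparities += 1
    rate = disparities / n_samples
    # Hoeffding bound: with probability 1 - delta, the true disparity
    # rate is at most rate + sqrt(ln(1/delta) / (2 * n_samples)).
    return rate, min(1.0, rate + math.sqrt(math.log(1 / delta) / (2 * n_samples)))

A bound like this holds only over the sampled distribution; a formal certificate of the kind the project proposes would instead hold over a specified space of inputs.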

Shenlong Wang, assistant professor of computer science, and Romit Roy Choudhury, W. J. Jerry Sanders III – Advanced Micro Devices, Inc. Scholar, and an Amazon Scholar, “Integrating spatial perception into conversational AI for real-world task assistance”

“We propose novel, effective conversational AI workflows that can acquire, update, and leverage rich spatial knowledge about users and their surrounding environments gathered from multi-modal sensing and perception.”
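As a deliberately simplified picture of the kind of spatial knowledge such a workflow might maintain, the Python sketch below stores last-seen object positions and answers a “where is it?” query relative to the user. Every name in it is hypothetical, and a real system would fuse much richer multimodal signals.

from dataclasses import dataclass, field

@dataclass
class SpatialMemory:
    # Object name -> last observed (x, y, z) position in the room frame.
    objects: dict = field(default_factory=dict)

    def update(self, name, position):
        # Called whenever perception re-detects an object.
        self.objects[name] = position

    def answer_where(self, name, user_position):
        # Resolve a "where is my ...?" query relative to the user.
        if name not in self.objects:
            return f"I haven't seen your {name} yet."
        ox, oy, oz = self.objects[name]
        ux, uy, uz = user_position
        dist = ((ox - ux) ** 2 + (oy - uy) ** 2 + (oz - uz) ** 2) ** 0.5
        return f"Your {name} is about {dist:.1f} meters from you."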

Han Zhao, assistant professor of computer science, “Responsible conversational AI: Monitoring and improving safe foundation models”

“We propose to develop two new general safety measures: Robust-Confidence Safety (RCS) and Self-Consistency Safety (SCS). RCS requires an LLM to recognize a low-confidence scenario when it has to deal with an out-of-distribution (OOD) application instance or a rare tail event, and thus assign a low confidence score to prevent the potentially incorrect response from being generated or delivered to a user. SCS requires an LLM to be self-consistent in any context; it is considered unsafe with regard to SCS if it generates (logically) inconsistent responses in the same or similar application context, as in such cases at least one of them must be false.”
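The two measures lend themselves to a compact conceptual sketch. In the Python fragment below, the model, confidence, and consistent callables are all hypothetical stand-ins; the research itself concerns how to realize such checks reliably at scale.

def rcs_gate(response, confidence, threshold=0.7):
    # Robust-Confidence Safety: withhold a response whose confidence
    # (e.g., from an OOD detector) falls below the threshold.
    return response if confidence >= threshold else None

def scs_check(model, query, consistent, n_samples=5):
    # Self-Consistency Safety: sample several responses to the same query
    # and flag the model if any two contradict each other, since at least
    # one of a contradictory pair must be false.
    responses = [model(query) for _ in range(n_samples)]
    for i in range(len(responses)):
        for j in range(i + 1, len(responses)):
            if not consistent(responses[i], responses[j]):
                return False, (responses[i], responses[j])
    return True, None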


