Welcome :)

Hello there! I’m Ke Yang (杨可), currently a second-year Ph.D. student at UIUC advised by Professor Chengxiang Zhai. I hold a bachelor’s degree from Tsinghua University, where I worked as a research assistant in Professor Jie Tang’s group. In the summer of 2022, I interned with Professor Heng Ji’s group at UIUC, and in the summer of 2024 I interned at Amazon.

My research centers on natural language processing, with a particular interest in intelligent agents, language models, graph neural networks, and multimodal foundation models. During the winter break of 2022, I teamed up with a group of skilled software engineers to build Zempath, an online social platform featuring chatbots with distinctive personalities. I am also interested in NLP for social good and efficient learning methods.

Main Publications

arXiv 2024

AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents

Ke Yang, Yao Liu, Sapana Chaudhary, Rasool Fakoor, Pratik Chaudhari, George Karypis, Huzefa Rangwala

Our AgentOccam surpasses previous state-of-the-art and concurrent LLM-based web agents through its observation and action space alignment. We achieve this without using in-context examples, new agent roles, online feedback, or search strategies.

NeurIPS 2024

Prejudice and Volatility: A Statistical Framework for Measuring Social Discrimination in Large Language Models

Yiran Liu*, Ke Yang*, Zehan Qi, Xiao Liu, Yang Yu, Chengxiang Zhai (* indicates equal contributions)

The Prejudice-Volatility Framework measures discrimination in models by considering both their consistently biased preferences and the variation of those preferences across diverse contexts.
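As a toy illustration (my own hedged sketch, not the paper’s actual estimators), one can picture “prejudice” as a model’s average preference bias across contexts and “volatility” as how much that bias fluctuates from context to context:

```python
import statistics

def prejudice_and_volatility(bias_scores):
    """Toy decomposition: 'prejudice' as the mean bias across contexts,
    'volatility' as its spread. Illustrative only; the paper's framework
    defines its own statistical estimators."""
    prejudice = statistics.mean(bias_scores)     # consistent preference
    volatility = statistics.pstdev(bias_scores)  # variation across contexts
    return prejudice, volatility

# Hypothetical per-context bias scores for one model.
scores = [0.12, 0.30, 0.05, 0.22, 0.18]
p, v = prejudice_and_volatility(scores)
print(f"prejudice={p:.3f}, volatility={v:.3f}")
```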

ICLR Workshop 2024

If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents

Ke Yang*, Jiateng Liu*, John Wu, Chaoqi Yang, Yi R. Fung, Sha Li, Zixuan Huang, Xu Cao, Xingyao Wang, Yiquan Wang, Heng Ji, Chengxiang Zhai (* indicates equal contributions)

The Wizard survey explores the synergy between code and large language models (LLMs), highlighting how code empowers LLMs and benefits them when they serve as intelligent agents. We emphasize code’s readability, symbolic abstraction, and graph structure, presenting it as a valuable component of LLMs’ training corpora.

AAAI 2023

ADEPT: A DEbiasing PrompT Framework

Ke Yang, Charles Yu, Yi Fung, Manling Li, Heng Ji

ADEPT introduces a novel debiasing loss function based on insights from counterfactual bias and manifold learning. “Prompt” here refers to prompt-tuning (PEFT, parameter-efficient fine-tuning) rather than prompt engineering.
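To make the distinction concrete, here is a minimal prompt-tuning sketch using the Hugging Face peft library (generic PEFT usage for illustration, not ADEPT’s debiasing objective): only a small set of soft prompt embeddings is trained while the backbone model stays frozen.

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

# Load a backbone; prompt-tuning freezes it and trains only new
# virtual-token (soft prompt) embeddings prepended to the input.
model = AutoModelForCausalLM.from_pretrained("gpt2")

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # soft prompt length; an arbitrary choice here
)
model = get_peft_model(model, config)

# Only the prompt embeddings are trainable, a tiny fraction of all parameters.
model.print_trainable_parameters()
```

ADEPT attaches its debiasing objective to prompts trained in this parameter-efficient style; see the paper for the actual loss.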

Zempath

Our promotional video for Zempath presents the inspirations and principles behind the platform. It showcases the user experience: chatting, posting anonymously or under one’s real name, conversing with our personalized chatbots, and connecting with like-minded people. Here is a snippet from the video:

[Video: Zempath promotional snippet]

Miscellaneous

Although I was born and raised in Shanghai, China, my true origins trace back to a serene and lesser-known village in Anhui. It’s there that my family is the proud custodian of a golden paddy field and a haven for wild geese!

I am an amateur novelist, painter, and photographer. In my spare time, I take photos of cats, my sister, my grandparents, friends, campus scenes, and more.