Welcome :)

Hello there! I’m Ke Yang (杨可), a first-year Ph.D. student at UIUC advised by Professor Chengxiang Zhai. I hold a bachelor’s degree in Automation from Tsinghua University, where I worked as a research assistant in Professor Jie Tang’s group. In the summer of 2022, I interned with Professor Heng Ji’s group at UIUC.

My research focuses on natural language processing, with particular interest in intelligent agents, language models, graph neural networks, and multimodal foundation models. During the winter break of 2022, I worked with a team of software engineers to build Zempath, an online social platform featuring chatbots with distinctive personalities. I am also interested in NLP for social good and in efficient learning methods.

Main Publications

arXiv 2024

Prejudice and Caprice: A Statistical Framework for Measuring Social Discrimination in Large Language Models

Yiran Liu*, Ke Yang*, Zehan Qi, Xiao Liu, Yang Yu, Chengxiang Zhai (* indicates equal contributions)

The Prejudice-Caprice Framework comprehensively measures discrimination in models by considering both their consistently biased preferences and their preference variation across diverse contexts.

ICLR Workshop 2024

If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents

Ke Yang*, Jiateng Liu*, John Wu, Chaoqi Yang, Yi R. Fung, Sha Li, Zixuan Huang, Xu Cao, Xingyao Wang, Yiquan Wang, Heng Ji, Chengxiang Zhai (* indicates equal contributions)

The Wizard survey explores the synergy between code and large language models (LLMs), highlighting how code empowers LLMs when they serve as intelligent agents. We emphasize code’s readability, symbolic abstraction, and graph structure, presenting it as a valuable component of LLMs’ training corpora.

AAAI 2023

ADEPT: A DEbiasing PrompT Framework

Ke Yang, Charles Yu, Yi Fung, Manling Li, Heng Ji

ADEPT introduces a novel debiasing loss function based on insights from counterfactual bias and manifold learning. “Prompt” here refers to prompt tuning (PEFT) rather than prompt engineering.

Zempath

In the promotional video for Zempath, we present our motivations and core principles. We showcase the user experience: chatting, posting anonymously or under one’s real name, conversing with our personalized chatbots, and connecting with like-minded people. Here is a snippet from the video:

(Zempath promotional video)

Miscellaneous

Although I was born and raised in Shanghai, China, my family comes from a quiet, little-known village in Anhui, where we are the proud custodians of a golden paddy field and a haven for wild geese!

I am an amateur novelist, painter, and photographer. In my spare time I take photos of cats, my sister, my grandparents, friends, and campus.