About Me
"And you may contribute a verse."
Hi, this is Yue Lin (/ˈjuːeɪ lɪn/, or 林越 in Chinese), and welcome to my personal website. I am currently a Ph.D. student (data science program) in the School of Data Science at The Chinese University of Hong Kong, Shenzhen, fortunately advised by Prof. Baoxiang Wang [Homepage] and Prof. Hongyuan Zha [Google Scholar]. I am also a joint Ph.D. student at the Shenzhen Loop Area Institute.
- My research interests lie primarily in designing efficient learning algorithms to guide agents toward better equilibria in multi-agent tasks. For experts: (1) solving sequential social dilemmas by designing learning algorithms that incorporate mechanism design, and (2) designing learning methods for mechanism design problems.
- Most of my focus is on sequential mixed-motive multi-agent scenarios, and I am particularly interested in communication mechanisms (keywords: information design, Bayesian persuasion, cheap talk), as well as other mechanisms for influencing others, such as incentivization.
- My expertise in learning methods lies in reinforcement learning (RL), particularly multi-agent hyper-gradient modeling, and in large language models (LLMs).
Research Interests
Currently
- Multi-Agent Reinforcement Learning
- Game Theory: Information Design
- LLMs for Game Solvers
Formerly
- Redundant Manipulator Control
- Robotic Mechanism Design
Education & Experience
Education
- Shenzhen Loop Area Institute
  Joint Ph.D. Student (2025.9 - Present)
- The Chinese University of Hong Kong, Shenzhen
  Ph.D. Student in Data Science (2024.8 - Present)
- Tianjin Polytechnic University
  Bachelor of Engineering in Computer Science and Technology (2018.9 - 2022.6)
  - School of Computer Science and Technology (2019.9 - 2022.6)
    GPA: 3.89 / 4 (92.22 / 100); Rank: 1 / 127
    [Certification]
  - School of Mechanical Engineering (2018.9 - 2019.6)
    GPA: 3.90 / 4 (92.00 / 100); Rank: 1 / 60
Experience
- The Chinese University of Hong Kong, Shenzhen
Research Assistant in the School of Data Science (2022.2 - Present)
Selected Publications
Data Science
- Information Design in Multi-Agent Reinforcement Learning.
Yue Lin, Wenhao Li, Hongyuan Zha, Baoxiang Wang.
Neural Information Processing Systems (NeurIPS) 2023. Poster. This is currently my most representative work.
[Paper] [Code] [Experiments] [Blog en] [Blog cn] [Zhihu cn] [Slides] [Talk en] [Talk RLChina]
[Patent] [Click to check the Abstract]
Reinforcement learning (RL) is inspired by the way human infants and animals learn from the environment. The setting is somewhat idealized because, in actual tasks, other agents in the environment have their own goals and behave adaptively to the ego agent. To thrive in those environments, the agent needs to influence other agents so that their actions become more helpful and less harmful. Research in computational economics distills two ways to influence others directly: by providing tangible goods (mechanism design) and by providing information (information design). This work investigates information design problems for a group of RL agents. The main challenges are two-fold. One is that the information provided will immediately affect the transition of the agent trajectories, which introduces additional non-stationarity. The other is that the information can be ignored, so the sender must provide information that the receiver is willing to respect. We formulate the Markov signaling game, and develop the notions of the signaling gradient and the extended obedience constraints that address these challenges. Our algorithm is efficient on various mixed-motive tasks and provides further insights into computational economics. Our code is publicly available at https://github.com/YueLin301/InformationDesignMARL.
[Click to check the BibTeX code]
@article{lin2023information,
  title={Information design in multi-agent reinforcement learning},
  author={Lin, Yue and Li, Wenhao and Zha, Hongyuan and Wang, Baoxiang},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  pages={25584--25597},
  year={2023}
}
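As a minimal illustration of the obedience idea mentioned in the abstract above, the sketch below solves the classic one-shot Bayesian persuasion problem (a prosecutor persuading a judge) as a tiny linear program whose inequality constraints are the obedience (incentive-compatibility) conditions. The prior P(guilty) = 0.3 and the payoffs are assumed toy values; this is only the standard single-shot setting, not the Markov signaling game or signaling-gradient algorithm from the paper.

```python
# Toy one-shot Bayesian persuasion (prosecutor-judge), for intuition only.
# Assumed toy values: prior P(guilty) = 0.3; the receiver gets 1 for a correct
# verdict; the sender gets 1 whenever the receiver convicts.
# Decision variables: x = [P(recommend "convict" | guilty),
#                          P(recommend "convict" | innocent)].
from scipy.optimize import linprog

p_guilty = 0.3

# Sender maximizes P(convict) = 0.3 * x_g + 0.7 * x_i  ->  minimize the negative.
c = [-p_guilty, -(1 - p_guilty)]

# Obedience constraints (the receiver must be willing to follow each recommendation):
#   on "convict":  0.3 * x_g >= 0.7 * x_i              ->  -0.3 x_g + 0.7 x_i <= 0
#   on "acquit":   0.7 * (1 - x_i) >= 0.3 * (1 - x_g)  ->  -0.3 x_g + 0.7 x_i <= 0.4
A_ub = [[-p_guilty, 1 - p_guilty],
        [-p_guilty, 1 - p_guilty]]
b_ub = [0.0, 0.4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
x_g, x_i = res.x
print(f"pi(convict|guilty) = {x_g:.3f}, pi(convict|innocent) = {x_i:.3f}")
print(f"P(convict) = {-res.fun:.3f}")  # 0.6, versus 0.3 under full disclosure
```

With these toy numbers the optimal scheme always recommends conviction when guilty and with probability 3/7 when innocent, raising the conviction probability from 0.3 (full disclosure) to 0.6 while keeping the receiver willing to obey.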
- Information Bargaining: Bilateral Commitment in Bayesian Persuasion.
Yue Lin, Shuhui Zhu, William A Cunningham, Wenhao Li, Pascal Poupart, Hongyuan Zha, Baoxiang Wang.
EC 2025 Workshop: Information Economics and Large Language Models. An alternative version is titled Bayesian Persuasion as a Bargaining Game.
[Paper] [Code & Experiments] [Click to check the Abstract]
Bayesian persuasion, an extension of cheap-talk communication, involves an informed sender committing to a signaling scheme to influence a receiver’s actions. Compared to cheap talk, the sender’s commitment enables the receiver to verify the incentive compatibility of signals beforehand, facilitating cooperation. While effective in one-shot scenarios, Bayesian persuasion faces computational complexity (NP-hardness) when extended to long-term interactions, where the receiver may adopt dynamic strategies conditional on past outcomes and future expectations. To address this complexity, we introduce the bargaining perspective, which allows: (1) a unified framework and well-structured solution concept for long-term persuasion, with desirable properties such as fairness and Pareto efficiency; (2) a clear distinction between two previously conflated advantages: the sender’s informational advantage and first-proposer advantage. With only modest modifications to the standard setting, this perspective makes explicit the common knowledge of the game structure and grants the receiver comparable commitment capabilities, thereby reinterpreting classic one-sided persuasion as a balanced information bargaining framework. The framework is validated through a two-stage validation-and-inference paradigm: we first demonstrate that, among publicly available LLMs, GPT-o3 and DeepSeek-R1 reliably handle standard tasks; we then apply them to persuasion scenarios to test whether the outcomes align with what our information-bargaining framework suggests. All code, results, and terminal logs are publicly available at https://github.com/YueLin301/InformationBargaining.
[Click to check the BibTeX code]
@article{lin2025bayesian,
  title={Bayesian Persuasion as a Bargaining Game},
  author={Lin, Yue and Zhu, Shuhui and Cunningham, William A and Li, Wenhao and Poupart, Pascal and Zha, Hongyuan and Wang, Baoxiang},
  journal={arXiv preprint arXiv:2506.05876},
  year={2025}
}
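The bargaining perspective in the abstract above appeals to solution concepts with fairness and Pareto-efficiency properties. As a reference point only, here is a minimal sketch of the symmetric Nash bargaining solution computed over a toy, discretized feasible set; the frontier, disagreement point, and grid are assumptions for illustration and are unrelated to the paper's framework or experiments.

```python
# Minimal Nash bargaining solution on a toy, discretized feasible set.
# Assumed toy values: candidate joint payoffs on a concave frontier and a
# disagreement point d; the Nash solution maximizes the product of gains.
import numpy as np

# Candidate agreements: points (u1, u2) on the frontier u2 = sqrt(1 - u1^2).
u1 = np.linspace(0.0, 1.0, 1001)
u2 = np.sqrt(1.0 - u1**2)
d = np.array([0.2, 0.1])  # disagreement (no-deal) payoffs, assumed

# Only agreements that improve on disagreement for both players are admissible.
feasible = (u1 >= d[0]) & (u2 >= d[1])
nash_product = np.where(feasible, (u1 - d[0]) * (u2 - d[1]), -np.inf)

best = np.argmax(nash_product)
print(f"Nash bargaining solution ~ (u1, u2) = ({u1[best]:.3f}, {u2[best]:.3f})")
```

The maximizer of the product of gains is Pareto-efficient on this frontier, and shifting the disagreement point d moves the solution in favor of the player with the better outside option, which is the fairness intuition the solution concept encodes.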
- Verbalized Bayesian Persuasion.
Wenhao Li, Yue Lin, Xiangfeng Wang, Bo Jin, Hongyuan Zha, Baoxiang Wang.
EC 2025 Workshop: Information Economics and Large Language Models. [Click to check the Abstract]
Information design (ID) explores how a sender influences the optimal behavior of receivers to achieve specific objectives. While ID originates from everyday human communication, existing game-theoretic and machine learning methods often model information structures as numbers, which limits many applications to toy games. This work leverages LLMs and proposes a verbalized framework for Bayesian persuasion (BP), which extends classic BP to real-world games involving human dialogues for the first time. Specifically, we map BP to a verbalized mediator-augmented extensive-form game, where LLMs instantiate the sender and receiver. To efficiently solve the verbalized game, we propose a generalized equilibrium-finding algorithm combining LLMs and a game solver. The algorithm is reinforced with techniques including verbalized commitment assumptions, verbalized obedience constraints, and information obfuscation. Numerical experiments in dialogue scenarios, such as recommendation letters, courtroom interactions, and law enforcement, validate that our framework can both reproduce theoretical results in classic BP and discover effective persuasion strategies in more complex natural language and multi-stage scenarios.
[Click to check the BibTeX code]
@article{li2025verbalized,
  title={Verbalized Bayesian Persuasion},
  author={Li, Wenhao and Lin, Yue and Wang, Xiangfeng and Jin, Bo and Zha, Hongyuan and Wang, Baoxiang},
  journal={arXiv preprint arXiv:2502.01587},
  year={2025}
}
Robotics
- Innovative Design and Simulation of a Transformable Robot with Flexibility and Versatility, RHex-T3.
Yue Lin, Yujia Tian, Yongjiang Xue, Shujun Han, Huaiyu Zhang, Wenxin Lai, Xuan Xiao.
International Conference on Robotics and Automation (ICRA) 2021. Oral. Delivered a presentation at the Xi’an conference venue.
[Paper] [Blog] [Demo Videos] [Click to check the Abstract]
This paper presents a transformable RHex-inspired robot, RHex-T3, with high energy efficiency, excellent flexibility, and versatility. By using the innovative 2-DoF transformable structure, RHex-T3 inherits most of RHex’s mobility and can also switch to 4 other modes for handling various missions. The wheel-mode improves the efficiency of RHex-T3, and the leg-mode helps generate smooth locomotion when RHex-T3 is overcoming obstacles. In addition, RHex-T3 can switch to the claw-mode for transportation missions, and can even climb ladders by using the hook-mode. A simulation model is constructed based on the mechanical structure, and the properties of the different modes are verified and analyzed through numerical simulations.
[Click to check the BibTeX code]
@inproceedings{lin2021innovative,
  title={Innovative design and simulation of a transformable robot with flexibility and versatility, RHex-T3},
  author={Lin, Yue and Tian, Yujia and Xue, Yongjiang and Han, Shujun and Zhang, Huaiyu and Lai, Wenxin and Xiao, Xuan},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={6992--6998},
  year={2021},
  organization={IEEE}
}
- A snake-inspired path planning algorithm based on reinforcement learning and self-motion for hyper-redundant manipulators.
Yue Lin, Jianming Wang, Xuan Xiao, Ji Qu, Fatao Qin.
International Journal of Advanced Robotic Systems (IJARS) 2022. [Click to check the Abstract]
Redundant manipulators are flexible enough to adapt to complex environments, but their controllers must also handle their extra degrees of freedom. Inspired by the morphology of snakes, we propose a path planning algorithm named Swinging Search and Crawling Control, which allows snake-like redundant manipulators to explore complex pipeline environments without collision. The proposed algorithm consists of the Swinging Search and the Crawling Control. In the Swinging Search, a collision-free manipulator configuration with its end-effector at the target point is found by applying reinforcement learning to self-motion, instead of designing joint motions. The self-motion narrows the search space to the null space, and the reinforcement learning lets the algorithm exploit information about the environment instead of searching blindly. Then, in the Crawling Control, the manipulator is controlled to crawl to the target point like a snake along the collision-free configuration. The algorithm only needs to search for a single collision-free configuration, instead of searching for collision-free configurations throughout the path planning process. Simulation experiments show that the algorithm can complete path planning tasks for hyper-redundant manipulators in complex environments. The 16-DoF and 24-DoF manipulators achieve 83.3% and 96.7% success rates in the pipe, respectively. In the concentric pipe, the 24-DoF manipulator has a success rate of 96.1%.
[Click to check the BibTeX code]
@article{lin2022snake,
  title={A snake-inspired path planning algorithm based on reinforcement learning and self-motion for hyper-redundant manipulators},
  author={Lin, Yue and Wang, Jianming and Xiao, Xuan and Qu, Ji and Qin, Fatao},
  journal={International Journal of Advanced Robotic Systems},
  volume={19},
  number={4},
  pages={17298806221110022},
  year={2022},
  publisher={SAGE Publications Sage UK: London, England}
}
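The "self-motion" the abstract above builds on is the standard redundancy-resolution idea: joint velocities projected into the null space of the task Jacobian reshape the arm without moving the end-effector, so a planner can search within that null space for collision-free configurations. Below is a minimal planar-arm sketch of that projection; the 4-DoF arm, link lengths, and velocities are assumed values for illustration, and the paper's Swinging Search / Crawling Control pipeline itself is not reproduced here.

```python
# Null-space self-motion for a redundant planar arm: joint velocities in the
# null space of the Jacobian change the configuration while keeping the
# end-effector still. The 4-DoF arm and sample values are illustrative assumptions.
import numpy as np

def forward_kinematics(q, lengths):
    """End-effector (x, y) of a planar serial arm with joint angles q."""
    angles = np.cumsum(q)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(q, lengths):
    """2 x n positional Jacobian of the planar arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for j in range(len(q)):
        J[0, j] = -np.sum(lengths[j:] * np.sin(angles[j:]))
        J[1, j] = np.sum(lengths[j:] * np.cos(angles[j:]))
    return J

q = np.array([0.3, -0.5, 0.8, 0.2])   # joint angles (rad), 4 DoF > 2 task DoF
lengths = np.ones(4)                  # unit link lengths
print("End-effector position:", forward_kinematics(q, lengths))

J = jacobian(q, lengths)
J_pinv = np.linalg.pinv(J)

q0_dot = np.array([0.1, -0.2, 0.05, 0.3])   # any candidate joint velocity
N = np.eye(len(q)) - J_pinv @ J             # null-space projector
self_motion = N @ q0_dot                    # reshapes the arm only

print("End-effector velocity from self-motion:", J @ self_motion)  # ~ [0, 0]
```

Because J @ (I - J⁺J) = 0, any velocity pushed through the projector leaves the end-effector (numerically) stationary, which is why restricting the search to this null space shrinks the planning problem as described in the abstract.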
Professional Services
Independent Reviewer
- NeurIPS 2024 [6, 45615], 2025 [5, 32912]
- ICLR 2025 [3, 21831], 2026 [0]
- ICML 2025 [6, 32893]
- TMLR 2025 [2, 38363]
Volunteer
- AAMAS 2024 [3, 9876], 2025 [2, 4792]
- ICML (Position) 2025 [2, 5016]
Numbers in brackets indicate the number of manuscripts reviewed and the character count of all reviews, respectively. A “0” means the invitation was accepted, but no review assignment has been made yet. Total reviews: 29.
Teaching
Teaching Assistant
- CSC6021/AIR6001 Artificial Intelligence (2024-25 Term 2).
Patents
- Multi-agent reinforcement learning communication method, terminal device, and storage medium (多智能体强化学习通信方法、终端设备及存储介质)
  Inventors: Yue Lin, Wenhao Li, Hongyuan Zha, Baoxiang Wang
  Applicant: The Chinese University of Hong Kong, Shenzhen
  Type: Invention patent
  Status: Granted. Patent No.: ZL 2023 1 0397744.0; Grant Announcement No.: CN 116455754 B; Grant Date: 2025.9.16
  [Certificate]
Hobbies
- Video Games
  - 王者荣耀 Honor of Kings (retired)
    - 诸葛亮 (Zhuge Liang): No. 74 nationwide, combat power 13853, win rate 57.5%, 1956 matches
    - 嫦娥 (Chang'e): No. 57 in Zhejiang Province, combat power 10725, win rate 64.3%, 210 matches
    - 米莱狄 (Milady): No. 42 in Zhejiang Province, combat power 7863, win rate 54.6%, 414 matches
    - Peak Tournament: 2063 points as jungler (No. 3892 nationwide), 1849 points as mid laner, 800 Peak matches in total
  - Overwatch 1 (retired)
    - Doomfist: 284 hours, 1669 matches, win rate 54%, kill/death 26068/13137
  - Steam
    - Hollow Knight, Hollow Knight: Silksong, The Stanley Parable, Batman: Arkham Knight, Marvel Rivals...
- Movies
Contact
Type | ID |
---|---|
Email | linyue3h1@gmail.com |
 | R01SVP |
GitHub | github.com/YueLin301 |
Google Scholar | Yue Lin |
Zhihu | R01SVP |
Bilibili | 不梦眠 |