Baichuan 2: Open large-scale language models. A Yang, B Xiao, B Wang, B Zhang, C Bian, C Yin, C Lv, D Pan, D Wang, et al. arXiv preprint arXiv:2309.10305, 2023. | 454* | 2023 |
BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset. J Ji, M Liu, J Dai, X Pan, C Zhang, C Bian, R Sun, Y Wang, Y Yang. NeurIPS 2023. | 254 | 2023 |
Safe RLHF: Safe reinforcement learning from human feedback. J Dai, X Pan, R Sun, J Ji, X Xu, M Liu, Y Wang, Y Yang. ICLR 2024 (Spotlight). | 179 | 2024 |
AI alignment: A comprehensive survey. J Ji, T Qiu, B Chen, B Zhang, H Lou, K Wang, Y Duan, Z He, J Zhou, et al. arXiv preprint arXiv:2310.19852, 2023. | 179 | 2023 |
Bi-DexHands: Towards human-level bimanual dexterous manipulation. Y Chen, Y Geng, F Zhong, J Ji, J Jiang, Z Lu, H Dong, Y Yang. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. | 114* | 2023 |
Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark. J Ji, B Zhang, J Zhou, X Pan, W Huang, R Sun, Y Geng, Y Zhong, J Dai, et al. NeurIPS 2023. | 61* | 2023 |
Constrained update projection approach to safe policy optimization. L Yang, J Ji, J Dai, L Zhang, B Zhou, P Li, Y Yang, G Pan. NeurIPS 2022. | 45 | 2023 |
Aligner: Efficient alignment by learning to correct. J Ji, B Chen, H Lou, D Hong, B Zhang, X Pan, T Qiu, J Dai, Y Yang. NeurIPS 2024 (Oral Presentation). | 43* | 2024 |
OmniSafe: An infrastructure for accelerating safe reinforcement learning research. J Ji, J Zhou, B Zhang, J Dai, X Pan, R Sun, W Huang, Y Geng, M Liu, et al. JMLR 2024. | 34 | 2023 |
Heterogeneous-Agent Reinforcement Learning. Y Zhong, JG Kuba, S Hu, J Ji, Y Yang. JMLR, 2023. | 33 | 2023 |
The application of large language models in medicine: A scoping review. X Meng, X Yan, K Zhang, D Liu, X Cui, Y Yang, M Zhang, C Cao, J Wang, et al. iScience 27 (5), 2024. | 20 | 2024 |
CUP: A conservative update policy algorithm for safe reinforcement learning. L Yang, J Ji, J Dai, Y Zhang, P Li, G Pan. arXiv preprint arXiv:2202.07565, 2022. | 18 | 2022 |
Augmented proximal policy optimization for safe reinforcement learning. J Dai, J Ji, L Yang, Q Zheng, G Pan. Proceedings of the AAAI Conference on Artificial Intelligence 37 (6), 7288-7295, 2023. | 13 | 2023 |
SafeDreamer: Safe Reinforcement Learning with World Models. W Huang, J Ji, B Zhang, C Xia, Y Yang. ICLR 2024. | 12 | 2023 |
PKU-Beaver: Constrained value-aligned LLM via safe RLHF. J Dai, X Pan, J Ji, R Sun, Y Wang, Y Yang. | 11 | 2023 |
PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference. J Ji, D Hong, B Zhang, B Chen, J Dai, B Zheng, T Qiu, B Li, Y Yang. arXiv preprint arXiv:2406.15513, 2024. | 9* | 2024 |
VOCE: Variational Optimization with Conservative Estimation for Offline Safe Reinforcement Learning. J Guan, G Chen, J Ji, L Yang, A Zhou, Z Li. NeurIPS 2023. | 9 | 2023 |
MyoChallenge 2022: Learning contact-rich manipulation using a musculoskeletal hand. V Caggiano, G Durandau, H Wang, A Chiappa, A Mathis, P Tano, N Patel, et al. NeurIPS 2022 Competition Track, pp. 233-250, 2023. | 7 | 2023 |
Reward Generalization in RLHF: A Topological Perspective. T Qiu, F Zeng, J Ji, D Yan, K Wang, J Zhou, Y Han, J Dai, X Pan, Y Yang. arXiv preprint arXiv:2402.10184, 2024. | 4 | 2024 |
Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback. J Zhou, J Ji, J Dai, Y Yang. arXiv preprint arXiv:2409.00162, 2024. | 1 | 2024 |