Foundation
The Doubao LLM Foundation team is responsible for the engineering architecture, model architecture design, and code generation capabilities of large models.
Research Areas
Engineering Architecture
Work covers large-scale distributed training, high-performance inference, and engineering architectures that exploit emerging hardware.
Model Architecture Design
Focused on training more capable models at lower cost, including research on sparse models such as MoE, more efficient attention structures, and joint optimization with the engineering stack (see the sketch after this section).
Code Generation
Responsible for improving the coding performance of large models across the full pipeline, from pre-training through RL.
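As a concrete illustration of the sparse-MoE direction mentioned above, the sketch below implements top-k token routing in plain NumPy: a router selects k of E expert FFNs per token, so per-token compute scales with k while total parameters scale with E. This is a generic textbook formulation for illustration only, not the team's model design; all names are hypothetical.

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Sparse mixture-of-experts layer with top-k routing.

    x:       (tokens, d) activations
    gate_w:  (d, E) router weights
    experts: list of E callables, each an expert FFN (d -> d)
    Only k experts run per token, so FLOPs scale with k, not E.
    """
    logits = x @ gate_w                            # (tokens, E) router scores
    topk = np.argsort(logits, axis=-1)[:, -k:]     # indices of the k best experts
    # Softmax over the selected logits only.
    sel = np.take_along_axis(logits, topk, axis=-1)
    gates = np.exp(sel - sel.max(-1, keepdims=True))
    gates /= gates.sum(-1, keepdims=True)

    out = np.zeros_like(x)
    for i, token in enumerate(x):
        for j, e in enumerate(topk[i]):
            out[i] += gates[i, j] * experts[e](token)
    return out

# Toy usage: 4 tokens, width 8, 4 experts, top-2 routing.
rng = np.random.default_rng(0)
d, E = 8, 4
experts = [lambda t, W=rng.normal(size=(d, d)) / d: t @ W for _ in range(E)]
x = rng.normal(size=(4, d))
print(moe_layer(x, rng.normal(size=(d, E)), experts).shape)  # (4, 8)
```

This is why MoE models can grow total capacity without a proportional rise in training or inference FLOPs.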
Research Topics
Large Model Architecture
Design efficient model architectures that deliver better results at the lowest possible training and inference cost.
AI foundation model
Efficient
Large-Scale Training Clusters
Research on ultra-large training clusters: improving training stability and MFU, and training across clusters.
Large-scale
Stability
Inference Parallelism
Research on overcoming the memory-bandwidth bound in inference, multi-node inference, and a range of inference parallelism schemes (see the sketch below).
Inference
Parallel
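A back-of-the-envelope sketch of why small-batch autoregressive decoding is memory-bandwidth bound, which motivates the parallelism work above: each decoded token performs roughly 2 FLOPs per parameter but must stream every weight from HBM once. The hardware numbers below (312 TFLOPS BF16 peak and ~2 TB/s HBM bandwidth, roughly an A100) are illustrative assumptions.

```python
def decode_arithmetic_intensity(batch_size: int, n_params: float,
                                bytes_per_param: int = 2) -> float:
    """FLOPs per byte of weight traffic for one decoding step.

    Each step performs ~2 * n_params FLOPs per sequence but must stream
    all weights from HBM once. At small batch sizes weight reads dominate
    KV-cache and activation traffic, which are ignored here.
    """
    flops = 2.0 * n_params * batch_size
    bytes_moved = n_params * bytes_per_param  # bf16 weights
    return flops / bytes_moved

# Illustrative GPU assumptions: 312 TFLOPS BF16 peak, ~2 TB/s HBM.
ridge = 312e12 / 2e12  # ~156 FLOPs/byte needed to be compute bound
for bs in (1, 8, 64, 256):
    ai = decode_arithmetic_intensity(bs, n_params=70e9)
    bound = "memory" if ai < ridge else "compute"
    print(f"batch {bs:>3}: intensity {ai:6.1f} FLOPs/byte -> {bound} bound")
```

Raising the effective batch per weight read, via batching, multi-device parallelism, or better KV-cache placement, is what moves decoding toward the compute-bound regime.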
More Advanced Architectures and Paradigms
Research more advanced model architectures, training paradigms, and inference paradigms for next-generation computing systems.
Computing systems
Advanced
Algorithmic Innovation
Study foundational algorithmic problems in large models and pursue algorithmic innovation.
Algorithm
Innovation

Selected Papers

2024.12.03
FullStack Bench: Evaluating LLMs as Full Stack Coders
As the capabilities of code large language models (LLMs) continue to expand, their applications across diverse code intelligence domains are rapidly increasing. However, most existing datasets only evaluate limited application domains. To address this gap, we have developed a comprehensive code evaluation dataset, FullStack Bench, focusing on full-stack programming, which encompasses a wide range of application domains (e.g., basic programming, data analysis, software engineering, mathematics, and machine learning). In addition, to assess multilingual programming capabilities, FullStack Bench includes real-world instructions and corresponding unit test cases covering 16 widely used programming languages, reflecting real-world usage scenarios rather than simple translations. Moreover, we release an effective code sandbox execution tool (i.e., SandboxFusion) supporting various programming languages and packages to evaluate performance on FullStack Bench efficiently. Comprehensive experimental results demonstrate the necessity and effectiveness of our FullStack Bench and SandboxFusion.
Siyao Liu, He Zhu, Jerry Liu, Shulin Xin, Aoyan Li, Rui Long, Li Chen, Jack Yang, Jinxiang Xia, Z.Y. Peng, Shukai Liu, Zhaoxiang Zhang, Jing Mai, Ge Zhang, Wenhao Huang, Kai Shen, Liang Xiang
Code Intelligence
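SandboxFusion is the authors' released tool, and its real API is not reproduced here. As a hedged sketch of what sandboxed unit-test evaluation involves, the snippet below runs a candidate solution against a test file in an isolated subprocess with a timeout; every name is hypothetical, and a production sandbox would additionally confine the filesystem, network, and memory of the child process.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_sandbox(solution_code: str, test_code: str, timeout_s: float = 10.0) -> bool:
    """Run unit tests against a candidate solution in an isolated
    subprocess. A hypothetical sketch of sandboxed evaluation, not the
    SandboxFusion API."""
    with tempfile.TemporaryDirectory() as workdir:
        Path(workdir, "solution.py").write_text(solution_code)
        Path(workdir, "test_solution.py").write_text(test_code)
        try:
            result = subprocess.run(
                [sys.executable, "test_solution.py"],
                cwd=workdir,
                capture_output=True,
                timeout=timeout_s,  # guards against non-terminating code
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0

if __name__ == "__main__":
    solution = "def add(a, b):\n    return a + b\n"
    tests = "from solution import add\nassert add(1, 2) == 3\n"
    print("pass" if run_in_sandbox(solution, tests) else "fail")
```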
2024.10.02
HybridFlow: A Flexible and Efficient RLHF Framework
Reinforcement Learning from Human Feedback (RLHF) is widely used in Large Language Model (LLM) alignment. Traditional RL can be modeled as a dataflow, where each node represents computation of a neural network (NN) and each edge denotes data dependencies between the NNs. RLHF complicates the dataflow by expanding each node into a distributed LLM training or generation program, and each edge into a many-to-many multicast. Traditional RL frameworks execute the dataflow using a single controller to instruct both intra-node computation and inter-node communication, which can be inefficient in RLHF due to large control dispatch overhead for distributed intra-node computation. Existing RLHF systems adopt a multi-controller paradigm, which can be inflexible due to nesting distributed computation and data communication. We propose HybridFlow, which combines single-controller and multi-controller paradigms in a hybrid manner to enable flexible representation and efficient execution of the RLHF dataflow. We carefully design a set of hierarchical APIs that decouple and encapsulate computation and data dependencies in the complex RLHF dataflow, allowing efficient operation orchestration to implement RLHF algorithms and flexible mapping of the computation onto various devices. We further design a 3D-HybridEngine for efficient actor model resharding between training and generation phases, with zero memory redundancy and significantly reduced communication overhead. Our experimental results demonstrate 1.53×~20.57× throughput improvement when running various RLHF algorithms using HybridFlow, as compared with state-of-the-art baselines. The HybridFlow source code is available at https://github.com/volcengine/verl.
Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, Chuan Wu
Reinforcement Learning
System Research
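The following is a conceptual sketch, not the verl API, of the hybrid-controller idea from the abstract: a single controller expresses the RLHF dataflow over whole worker groups, while each group internally runs as a multi-controller distributed program. All class and method names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class WorkerGroup:
    """Stand-in for a distributed LLM program (training or generation)
    spanning many GPUs with its own parallelism layout."""
    name: str

    def execute(self, method: str, batch):
        # A real system dispatches one call per worker group rather than
        # per GPU, keeping the single controller's dispatch overhead low.
        print(f"[{self.name}] {method} on {len(batch)} items")
        return batch

def rlhf_step(actor, critic, reward, ref, prompts):
    """One PPO-style RLHF iteration, written from the single-controller
    view: the dataflow is explicit, and every edge below hides a
    many-to-many resharding between differently parallelized models."""
    responses = actor.execute("generate", prompts)
    values = critic.execute("compute_values", responses)
    rewards = reward.execute("compute_reward", responses)
    ref_logps = ref.execute("compute_log_probs", responses)
    experience = list(zip(responses, values, rewards, ref_logps))
    actor.execute("update_policy", experience)
    critic.execute("update_values", experience)

if __name__ == "__main__":
    groups = [WorkerGroup(n) for n in ("actor", "critic", "reward", "ref")]
    rlhf_step(*groups, prompts=["p0", "p1", "p2", "p3"])
```

Keeping the dataflow explicit in one driver makes it easy to reorder or swap RLHF algorithms, while dispatching per worker group rather than per device keeps control-plane overhead small.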
2024.02.23
MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs
We present the design, implementation and engineering experience in building and deploying MegaScale, a production system for training large language models (LLMs) at the scale of more than 10,000 GPUs. Training LLMs at this scale brings unprecedented challenges to training efficiency and stability. We take a full-stack approach that co-designs the algorithmic and system components across model block and optimizer design, computation and communication overlapping, operator optimization, data pipeline, and network performance tuning. Maintaining high efficiency throughout the training process (i.e., stability) is an important consideration in production given the long extent of LLM training jobs. Many hard stability issues only emerge at large scale, and in-depth observability is the key to address them. We develop a set of diagnosis tools to monitor system components and events deep in the stack, identify root causes, and derive effective techniques to achieve fault tolerance and mitigate stragglers. MegaScale achieves 55.2% Model FLOPs Utilization (MFU) when training a 175B LLM model on 12,288 GPUs, improving the MFU by 1.34x compared to Megatron-LM. We share our operational experience in identifying and fixing failures and stragglers. We hope by articulating the problems and sharing our experience from a systems perspective, this work can inspire future LLM systems research.
Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, Yulu Jia, Sun He, Hongmin Chen, Zhihao Bai, Qi Hou, Shipeng Yan, Ding Zhou, Yiyao Sheng, Zhuo Jiang, Haohan Xu, Haoran Wei, Zhang Zhang, Pengfei Nie, Leqi Zou, Sida Zhao, Liang Xiang, Zherui Liu, Zhe Li, Xiaoying Jia, Jianxi Ye, Xin Jin, Xin Liu
System Research
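MFU measures achieved model FLOPs as a fraction of the cluster's theoretical peak. The sketch below shows the standard calculation using the common ~6 FLOPs per parameter per trained token approximation for dense transformers; the 312 TFLOPS BF16 per-GPU peak (A100 class) is an assumption for illustration, not a figure from the paper.

```python
def mfu(tokens_per_second: float, n_params: float, num_gpus: int,
        peak_flops_per_gpu: float) -> float:
    """Model FLOPs Utilization: achieved model FLOPs over peak hardware FLOPs.

    Uses the ~6 FLOPs per parameter per trained token approximation
    (forward + backward) for a dense transformer.
    """
    achieved = 6.0 * n_params * tokens_per_second
    peak = num_gpus * peak_flops_per_gpu
    return achieved / peak

# Inverting the formula for the setting reported in the abstract
# (175B parameters, 12,288 GPUs, 55.2% MFU), assuming A100-class GPUs
# with a 312 TFLOPS BF16 peak (an assumption, not from the paper):
peak = 312e12
tokens_per_s = 0.552 * 12288 * peak / (6.0 * 175e9)
print(f"implied throughput ≈ {tokens_per_s / 1e6:.2f}M tokens/s")
print(f"round-trip MFU = {mfu(tokens_per_s, 175e9, 12288, peak):.1%}")
```

Under the assumed peak, 55.2% MFU corresponds to roughly 2 million trained tokens per second across the 12,288-GPU job.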
Technology Applications
豆包 MarsCode
Doubao MarsCode is an intelligent development tool built on the Doubao LLM, offered both as a Cloud IDE and as an AI coding assistant. It provides code completion, intelligent Q&A, code explanation, code repair, and more, saving development time and freeing up creativity. As a concrete application of the Doubao code model, MarsCode recognizes the context relevant to the current coding task and combines code understanding, generation, optimization, recommendation, completion, and review in a single tool, embedding seamlessly into every stage of the development workflow to help developers improve code quality and efficiency.
Advanced development
Coding

Open Positions

Machine Learning Training Framework R&D Engineer/Expert, Doubao LLM
Beijing/Shanghai/Shenzhen/Hangzhou
Experienced hire
Machine Learning Systems Inference Engine Senior Engineer/Expert, Doubao LLM
Beijing/Shanghai/Hangzhou
Experienced hire
Machine Learning Systems Scheduling Engineer/Expert, Doubao LLM
Beijing/Shanghai/Hangzhou
Experienced hire
LLM Inference Storage Systems Engineer/Expert, Doubao LLM
Beijing/Shanghai/Shenzhen/Hangzhou
Experienced hire
AI Heterogeneous Compute Optimization Engineer/Expert, Doubao LLM
Beijing/Shanghai/Shenzhen/Hangzhou
Experienced hire
Machine Learning Systems R&D Intern, Doubao LLM
Beijing/Shanghai/Shenzhen/Hangzhou
Internship