Foundation
The Doubao Foundation Model team oversees the engineering framework, architectural design, and code generation for AI foundation models.
Main areas of focus
Engineering architecture
The tasks involve large-scale distributed training, high-performance inference, and integration with new hardware.
Model structure design
The focus is on training effective models at lower cost by exploring sparse architectures such as Mixture-of-Experts (MoE) and more efficient attention structures, in close collaboration with the engineering team on performance optimization.
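To make the sparse-model idea concrete, below is a minimal sketch of a top-k routed MoE feed-forward layer in PyTorch. It illustrates the general technique only; the layer sizes and routing scheme are placeholder assumptions, not the team's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k routed mixture-of-experts feed-forward layer."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Each token is routed to its top-k experts.
        weights, expert_idx = torch.topk(self.router(x), self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            # Find the tokens (and their top-k slot) assigned to expert e.
            token_ids, slot = (expert_idx == e).nonzero(as_tuple=True)
            if token_ids.numel():
                out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return out

Only k of num_experts experts run per token, so total parameter count can grow with num_experts while per-token compute stays roughly constant, which is the cost advantage sparse models aim for.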
Code generation
The team focuses on improving the model's coding capabilities across the full training pipeline, from pre-training to reinforcement learning.
Research topics
AI foundation model architecture
Design efficient AI foundation model structures to achieve better results with minimal training and inference costs.
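As one concrete example of an efficiency-oriented structure, grouped-query attention (GQA) shares each key/value head across a group of query heads, shrinking the KV cache by the group factor. A minimal sketch of the mechanism, chosen here for illustration and not necessarily the structure this topic refers to:

import torch
import torch.nn.functional as F

def grouped_query_attention(q: torch.Tensor, k: torch.Tensor,
                            v: torch.Tensor) -> torch.Tensor:
    """Grouped-query attention (no masking, for brevity).
    q: (batch, num_q_heads, seq, dim); k, v: (batch, num_kv_heads, seq, dim)
    with num_q_heads a multiple of num_kv_heads. Each KV head serves a
    group of query heads, cutting KV-cache memory by the group factor.
    """
    group = q.shape[1] // k.shape[1]
    # Broadcast each KV head across its group of query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

# 8 query heads sharing 2 KV heads: 4x smaller KV cache than full MHA.
out = grouped_query_attention(torch.randn(1, 8, 16, 64),
                              torch.randn(1, 2, 16, 64),
                              torch.randn(1, 2, 16, 64))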
Large-scale training clusters
Study methods to improve the stability and Model FLOPs Utilization (MFU) of very large training clusters, including cross-cluster training techniques.
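MFU can be estimated from training throughput with a common back-of-envelope approximation: a dense transformer's forward plus backward pass costs roughly 6 FLOPs per parameter per token. A minimal sketch, with placeholder numbers rather than measured ones:

def mfu(params: float, tokens_per_sec: float, num_gpus: int,
        peak_flops_per_gpu: float) -> float:
    """Model FLOPs Utilization: achieved model FLOPs over aggregate peak.
    Uses the common ~6 * params FLOPs-per-token approximation for a dense
    transformer's forward + backward pass.
    """
    achieved_flops_per_sec = 6 * params * tokens_per_sec
    return achieved_flops_per_sec / (num_gpus * peak_flops_per_gpu)

# Placeholder numbers, not measurements: a 175e9-parameter model at an
# assumed 2.0e6 tokens/s on 12,288 GPUs with an assumed 312e12 peak
# FLOPs each (A100-class bf16) gives an MFU of roughly 0.55.
print(f"{mfu(175e9, 2.0e6, 12_288, 312e12):.1%}")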
Inference parallelization solutions
Research solutions to the memory-bandwidth limitations of inference, focusing on multi-machine inference and various parallel inference schemes.
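Why inference is often memory-access-limited, in back-of-envelope form: every decoded token must stream all model weights plus the growing KV cache from HBM, which sets a latency floor independent of compute. The hardware and model figures below are assumed for illustration only:

def decode_latency_floor(params: float, kv_cache_bytes: float,
                         hbm_bytes_per_sec: float,
                         bytes_per_param: float = 2.0) -> float:
    """Lower bound on per-token decode latency when memory-bound:
    each step reads all weights plus the KV cache from HBM once.
    """
    bytes_moved = params * bytes_per_param + kv_cache_bytes
    return bytes_moved / hbm_bytes_per_sec

# Assumed figures: a 70e9-parameter bf16 model with a 10 GB KV cache on a
# GPU with 3.35e12 B/s of HBM bandwidth -> ~45 ms per token, which is what
# motivates sharding weights across machines (tensor/pipeline parallelism).
print(f"{decode_latency_floor(70e9, 10e9, 3.35e12) * 1e3:.0f} ms/token floor")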
Advanced structures and patterns
Explore the integration of next-generation computing systems to investigate advanced model structures, training methodologies, and inference techniques.
Algorithm innovation
Investigate foundational algorithmic challenges in AI foundation models and identify opportunities for algorithmic innovation.

Selected Papers

Dec 03, 2024
FullStack Bench: Evaluating LLMs as Full Stack Coders
As the capabilities of code large language models (LLMs) continue to expand, their applications across diverse code intelligence domains are rapidly increasing. However, most existing datasets only evaluate limited application domains. To address this gap, we have developed FullStack Bench, a comprehensive code evaluation dataset focusing on full-stack programming, which encompasses a wide range of application domains (e.g., basic programming, data analysis, software engineering, mathematics, and machine learning). In addition, to assess multilingual programming capabilities, in FullStack Bench we design real-world instructions and corresponding unit test cases from 16 widely used programming languages to reflect real-world usage scenarios rather than simple translations. Moreover, we also release an effective code sandbox execution tool (i.e., SandboxFusion) supporting various programming languages and packages to evaluate the performance of our FullStack Bench efficiently. Comprehensive experimental results demonstrate the necessity and effectiveness of our FullStack Bench and SandboxFusion.
Siyao Liu, He Zhu, Jerry Liu, Shulin Xin, Aoyan Li, Rui Long, Li Chen, Jack Yang, Jinxiang Xia, Z.Y. Peng, Shukai Liu, Zhaoxiang Zhang, Jing Mai, Ge Zhang, Wenhao Huang, Kai Shen, Liang Xiang
Code Intelligence
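The sandboxed evaluation the abstract describes can be illustrated with a minimal subprocess-based runner: execute the candidate solution together with its unit tests in isolation, under a timeout, and read pass/fail from the exit code. This is a simplified stand-in for the idea, not SandboxFusion's actual API:

import os
import subprocess
import sys
import tempfile

def run_in_sandbox(solution: str, test_code: str, timeout_s: int = 10) -> bool:
    """Execute a candidate solution plus its unit tests in a subprocess.
    Isolated working directory, hard timeout, pass/fail from exit code.
    """
    with tempfile.TemporaryDirectory() as workdir:
        path = os.path.join(workdir, "candidate.py")
        with open(path, "w") as f:
            f.write(solution + "\n\n" + test_code + "\n")
        try:
            proc = subprocess.run([sys.executable, path], cwd=workdir,
                                  timeout=timeout_s, capture_output=True)
            return proc.returncode == 0
        except subprocess.TimeoutExpired:
            return False

print(run_in_sandbox("def add(a, b):\n    return a + b",
                     "assert add(2, 3) == 5"))  # True if the tests pass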
Oct 02, 2024
HybridFlow: A Flexible and Efficient RLHF Framework
Reinforcement Learning from Human Feedback (RLHF) is widely used in Large Language Model (LLM) alignment. Traditional RL can be modeled as a dataflow, where each node represents computation of a neural network (NN) and each edge denotes data dependencies between the NNs. RLHF complicates the dataflow by expanding each node into a distributed LLM training or generation program, and each edge into a many-to-many multicast. Traditional RL frameworks execute the dataflow using a single controller to instruct both intra-node computation and inter-node communication, which can be inefficient in RLHF due to large control dispatch overhead for distributed intra-node computation. Existing RLHF systems adopt a multi-controller paradigm, which can be inflexible due to nesting distributed computation and data communication. We propose HybridFlow, which combines single-controller and multi-controller paradigms in a hybrid manner to enable flexible representation and efficient execution of the RLHF dataflow. We carefully design a set of hierarchical APIs that decouple and encapsulate computation and data dependencies in the complex RLHF dataflow, allowing efficient operation orchestration to implement RLHF algorithms and flexible mapping of the computation onto various devices. We further design a 3D-HybridEngine for efficient actor model resharding between training and generation phases, with zero memory redundancy and significantly reduced communication overhead. Our experimental results demonstrate 1.53×~20.57× throughput improvement when running various RLHF algorithms using HybridFlow, as compared with state-of-the-art baselines. The HybridFlow source code is available at https://github.com/volcengine/verl.
Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, Chuan Wu
Reinforcement Learning
System Research
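To make the dataflow framing in the abstract concrete, here is a schematic single-controller view of one PPO-style RLHF step. The objects and method names are hypothetical placeholders, not HybridFlow's (verl's) actual API; in HybridFlow, each call below would dispatch to a distributed multi-controller program.

def rlhf_step(actor, critic, reward_model, reference, prompts):
    # Node: the actor generates responses (a distributed generation program).
    responses = actor.generate(prompts)
    # Edges: the generated batch is multicast to each downstream model node.
    rewards = reward_model.score(prompts, responses)
    values = critic.evaluate(prompts, responses)
    ref_logprobs = reference.logprobs(prompts, responses)
    # Nodes: advantage estimation and model updates (training programs).
    advantages = rewards - values  # simplified; PPO typically uses GAE
    actor.update(prompts, responses, advantages, ref_logprobs)
    critic.update(prompts, responses, rewards)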
Feb 23, 2024
MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs
We present the design, implementation and engineering experience in building and deploying MegaScale, a production system for training large language models (LLMs) at the scale of more than 10,000 GPUs. Training LLMs at this scale brings unprecedented challenges to training efficiency and stability. We take a full-stack approach that co-designs the algorithmic and system components across model block and optimizer design, computation and communication overlapping, operator optimization, data pipeline, and network performance tuning. Maintaining high efficiency throughout the training process (i.e., stability) is an important consideration in production given the long extent of LLM training jobs. Many hard stability issues only emerge at large scale, and in-depth observability is the key to address them. We develop a set of diagnosis tools to monitor system components and events deep in the stack, identify root causes, and derive effective techniques to achieve fault tolerance and mitigate stragglers. MegaScale achieves 55.2% Model FLOPs Utilization (MFU) when training a 175B LLM model on 12,288 GPUs, improving the MFU by 1.34x compared to Megatron-LM. We share our operational experience in identifying and fixing failures and stragglers. We hope by articulating the problems and sharing our experience from a systems perspective, this work can inspire future LLM systems research.
Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, Yulu Jia, Sun He, Hongmin Chen, Zhihao Bai, Qi Hou, Shipeng Yan, Ding Zhou, Yiyao Sheng, Zhuo Jiang, Haohan Xu, Haoran Wei, Zhang Zhang, Pengfei Nie, Leqi Zou, Sida Zhao, Liang Xiang, Zherui Liu, Zhe Li, Xiaoying Jia, Jianxi Ye, Xin Jin, Xin Liu
System Research
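The straggler diagnosis the abstract mentions can be illustrated with a small per-rank timing check: gather every rank's step time and flag outliers against the median. A simplified sketch of the idea, not MegaScale's actual tooling:

import torch
import torch.distributed as dist

def flag_stragglers(step_time_s: float, threshold: float = 1.15) -> list[int]:
    """Gather each rank's last step time and flag ranks slower than
    threshold x the median. Assumes torch.distributed is initialized.
    """
    gathered = [torch.zeros(1) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, torch.tensor([step_time_s]))
    times = torch.cat(gathered)
    median = times.median().item()
    return [rank for rank, t in enumerate(times.tolist())
            if t > threshold * median]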
Technical applications
MarsCode
MarsCode is an advanced development tool built on the large-scale Doubao model, offering two primary modes of operation: a Cloud IDE and an AI programming assistant. It provides features such as code completion, intelligent Q&A, code analysis and explanation, and bug fixing, using AI-driven capabilities to make programming more intelligent and reduce development time. MarsCode detects the context of the ongoing coding task and integrates a range of capabilities, including code understanding, generation, optimization, recommendation, completion, and review. These features are embedded throughout the R&D process, helping developers improve both the quality and efficiency of their code.

Featured Jobs

Research Scientist in ML Systems
Seattle / San Jose
Experienced Hiring
Software Engineer, ML System Architecture
Seattle / San Jose
Experienced Hiring
Research Scientist, Applied Machine Learning
Seattle / San Jose
Campus Recruitment
Software Engineer in Machine Learning Systems
Seattle / San Jose
Campus Recruitment
Software Engineer Intern (Doubao (Seed) - Machine Learning System)
Seattle / San Jose
Internship
Research Scientist Intern (Doubao (Seed) - Machine Learning System)
Seattle / San Jose
Internship