Doubao Large Model Team
The ByteDance Doubao Large Model Team was founded in 2023. It is dedicated to developing the industry's most advanced large AI model technology, building a world-class research team, and contributing to the advancement of technology and society.
Latest News
Academic Collaboration
ByteDance "Doubao Large Model Fund" opens for applications
Inviting young scholars to jointly explore frontier topics in large AI models
2024-06-12
Research Publications
Computer Vision

Make Pixels Dance: High-Dynamic Video Generation

Yan Zeng, Guoqiang Wei, Jiani Zheng, Jiaxin Zou, Yang Wei, Yuchen Zhang, Hang Li

2023-11-18

Speech & Audio

Seed-TTS: A Family of High-Quality Versatile Speech Generation Models

Philip Anastassiou, Jiawei Chen, Jitong Chen, Yuanzhe Chen, Zhuo Chen, Ziyi Chen, Jian Cong, Lelai Deng, Chuang Ding, Lu Gao, Mingqing Gong, Peisong Huang, Qingqing Huang, Zhiying Huang, Yuanyuan Huo, Dongya Jia, Chumin Li, Feiya Li, Hui Li, Jiaxin Li, Xiaoyang Li, Xingxing Li, Lin Liu, Shouda Liu, Sichao Liu, Xudong Liu, Yuchen Liu, Zhengxi Liu, Lu Lu, Junjie Pan, Xin Wang, Yuping Wang, Yuxuan Wang, Zhen Wei, Jian Wu, Chao Yao, Yifeng Yang, Yuanhao Yi, Junteng Zhang, Qidi Zhang, Shuo Zhang, Wenjie Zhang, Yang Zhang, Zilin Zhao, Dejian Zhong, Xiaobin Zhuang

2024-06-04

Responsible AI

Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment

Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li

2024-03-21

Computer Vision

SDXL-Lightning: Progressive Adversarial Diffusion Distillation

Shanchuan Lin, Anran Wang, Xiao Yang

2024-03-02

NLP

Diffusion Glancing Transformer for Parallel Sequence to Sequence Learning

Lihua Qian, Mingxuan Wang, Yang Liu, Hao Zhou

2023-11-29

System Research

MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs

Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, Yulu Jia, Sun He, Hongmin Chen, Zhihao Bai, Qi Hou, Shipeng Yan, Ding Zhou, Yiyao Sheng, Zhuo Jiang, Haohan Xu, Haoran Wei, Zhang Zhang, Pengfei Nie, Leqi Zou, Sida Zhao, Liang Xiang, Zherui Liu, Zhe Li, Xiaoying Jia, Jianxi Ye, Xin Jin, Xin Liu

2024-02-23

Computer Vision

PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning

Lin Xu, Yilin Zhao, Daquan Zhou, Zhijie Lin, See Kiong Ng, Jiashi Feng

2024-04-29

Model Family
Doubao General Model Pro
Supports 128K long context; the full model series can be fine-tuned; offers stronger overall capabilities in understanding, generation, and logical reasoning
Doubao General Model Lite
A lightweight model that offers lower token cost and lower latency than the Pro version
Doubao Role-Playing Model
Personalized character creation, with stronger context awareness and plot-advancement capabilities
Technology in Products
Lark · Volcengine · Doubao · Butterfly · Hualu · Dreamina