ByteDance
Top Seed Talent Program
/ Tech enthusiasts with a keen interest in areas such as machine learning, artificial intelligence, LLMs, computer vision, and audio and video generation
/ People with a proven track record in related academic research, engineering practice, and open-source communities
/ Exceptionally talented, daring to innovate, and highly self-driven. We value the brilliance inherent in you more than your experience and past achievements.
After joining ByteDance, he was a research scientist for video understanding in AI Lab before taking charge of the recommendation system. Liang joined AML in 2021 and now leads ByteDance's AML and Doubao Foundation teams. He spearheads the exploration of foundational and advanced AI algorithms and engineering technologies, swiftly bringing a range of AI products from concept to market.
Mingxuan has contributed to multiple open-source projects, including lightseq and mrasp, which are widely adopted in the industry. He also serves as an area chair and sponsorship chair for conferences such as NeurIPS, ACL, and EMNLP.
The Post-Training team's work covers instruction tuning, reward models, RLHF, RLAIF, and self-learning models. The team is also spearheading a series of pioneering and challenging studies in key areas such as the generalizability, interpretability, and factuality of large language models.
Currently, he heads the Speech team at ByteDance Doubao, leading the team in core research and applications of technologies focused on multimodal generation and understanding.
Jiashi currently heads the Vision Research team at ByteDance Doubao, conducting cutting-edge research in visual and multimodal foundation models, AIGC, and 3D avatar/object reconstruction and generation, while also focusing on the wide application of these technologies. Before joining ByteDance, he was an assistant professor in the ECE department of the National University of Singapore, leading the Vision and Machine Learning Laboratory, and advising or co-advising nearly 20 doctoral students.
Zhi is currently a research scientist for Vision Generation Models at ByteDance Doubao, focusing on computer vision algorithms for generation and understanding. He has authored multiple highly cited papers at leading conferences and in prestigious journals, amassing more than 11,000 citations on Google Scholar.
He currently leads the Audio Generation Research team at ByteDance Doubao. With more than 100 research papers and patents, he has pushed the frontier of multiple audio technologies, making significant contributions in areas including voice generation, recognition, and translation; voice separation and enhancement; speaker recognition and diarization; self-supervised learning for voice; multi-channel processing; and the development of open-source voice datasets.
He served consecutively as the head of ByteDance's Toutiao Webpage Search team and TikTok Video Search technology team. He built ByteDance's search system from scratch, making significant innovations and breakthroughs in ranking architecture and algorithms as well as multilingual and multimodal relevance, enabling ByteDance to deliver a leading Chinese-language search experience. Currently, he leads the Large Language Model Pre-training team at ByteDance Doubao, focusing on areas including data cleaning, synthesis, and blending; associative learning and curriculum learning; training algorithms; and scaling capability.
Wanjun is currently a researcher on the Doubao Seed LLM team at ByteDance, focusing on superalignment and reasoning in large language models. AGIEval, an LLM evaluation set she developed, is used for model evaluation by major mainstream companies.
/ Ample computing power and datasets coupled with nimble team collaboration, enabling you to concentrate on technical research
/ Rich application scenarios that enable hundreds of millions of users to experience your findings
/ Abundant opportunities to explore key research areas, ensuring long-term goals are not compromised by short-term metrics
/ Large investment in AI foundation models and agile decision-making, helping ideas quickly come to fruition
/ Experience full empowerment and trust that allow you to steer key research projects
/ Receive support from top mentors, who can help you identify the most critical and high-value research topics in the field
/ Interview directly with mentors and join a long-term training program lasting over a year, granting you the freedom to delve deeply into your research topic
/ Explore opportunities to communicate both internally and externally, exchanging insights on industry trends with leading experts
Highly competitive packages complemented by outstanding rewards
/ Top pay for top talents
/ Excellence is recognized and rewarded — your exceptional performance will never go unnoticed
/ Join us at any time and grow together with a company that's invested in your future