ByteDance
Top Seed Talent Program
Jiashi currently heads the Vision Research team at ByteDance Doubao, conducting cutting-edge research in visual and multimodal foundation models, AIGC, and 3D avatar/object reconstruction and generation, while also focusing on the wide application of these technologies. Before joining ByteDance, he was an assistant professor in the ECE department of the National University of Singapore, leading the Vision and Machine Learning Laboratory, and advising or co-advising nearly 20 doctoral students.
After joining ByteDance, he was a research scientist for video understanding in AI Lab before taking charge of the recommendation system. Liang joined AML in 2021 and has since been at the helm of ByteDance's AML and Doubao Foundation teams. He leads the charge in exploring foundational and advanced AI algorithms and engineering technologies, swiftly bringing a range of AI products from concept to market.
Mingxuan has participated in multiple open-source projects, including lightseq, mrasp, and others, which are widely adopted in the industry. He also serves as an area chair and sponsorship chair for conferences such as NeurIPS, ACL, and EMNLP.
The Alignment team's work includes instruction tuning, reward models, RLHF, RLAIF, and self-learning models. Meanwhile, the team is also spearheading a series of avant-garde and challenging studies in key areas such as the generalizability, interpretability, and authenticity of large language models.
Zhi is currently a research scientist for Vision Generation Models at ByteDance Doubao, focusing on the development and comprehension of computer vision algorithms. He has authored multiple highly-cited papers presented at leading conferences and in prestigious journals, amassing more than 11,000 citations on Google Scholar.
Currently, he heads the Speech team at ByteDance Doubao, leading the team in core research and applications of technologies focused on multimodal generation and understanding.
He currently leads the Audio Generation Research team at ByteDance Doubao. With more than 100 research papers and patents, he has pushed the frontier of multiple audio technologies and made significant contributions across voice generation, recognition, and translation; voice separation and enhancement; speaker recognition and diarization; self-supervised learning for voice; multi-channel processing; and the development of open-source voice datasets.
He served consecutively as the head of ByteDance's Toutiao Webpage Search team and TikTok Video Search technology team. He built ByteDance's search system from scratch, making significant innovations and breakthroughs in ranking architecture and algorithms, as well as multilingual and multimodal relevance. Thanks to his contributions, ByteDance was able to deliver a leading Chinese-language search experience. Currently, he leads the Large Language Model Pre-training team at ByteDance Doubao, focusing on areas including data cleaning, synthesis, and blending; associative learning and curriculum learning; training algorithms; and scaling capability.
/ Access to ample computing power, vast datasets, and rich application scenarios
/ Deeply engage in top-tier technical challenges and projects, where the possibilities for exploration and innovation are boundless
/ Opportunities to communicate both internally and externally, exchanging insights on industry trends with leading experts
/ Our best-in-class mentor team establishes tailored goals and plans to continuously fuel your growth
Highly competitive packages complemented by outstanding rewards
/ Top pay for top talents
/ Excellence is recognized and rewarded — your exceptional performance will never go unnoticed
/ Join us at any time and grow together with a company that's invested in your future