Facebook AI Infrastructure Director Yangqing Jia is leaving his position with the company, a person familiar with the matter told Synced. The Facebook team confirmed his departure yesterday.
Jia joined Facebook in 2016, and led a team of researchers and engineers building the general-purpose, large-scale AI platform that serves as the backbone for Facebook AI products, encompassing ranking, computer vision, natural language processing, speech recognition, mobile AI, and AR.
Jia is also the main developer of Caffe and Caffe2, machine learning frameworks providing state-of-the-art deep learning algorithms and a collection of reference models for researchers and scientists. Last year, Facebook merged Caffe2 into PyTorch, the company’s flagship open source machine learning framework.
China’s e-commerce & cloud computing giant Alibaba is rumored to be Jia’s next destination. Alibaba lacks an efficient and easy-to-use deep learning platform like Google’s TensorFlow or Baidu’s PaddlePaddle. It’s believed that with his expertise, Jia could make significant contributions in this area. Media representatives from the DAMO Academy, Alibaba’s research institute, told Synced that they have not yet received any HR notification regarding Jia possibly landing there.
Jia was born in scenic Shaoxing city in China’s eastern Zhejiang province. He earned his bachelor’s and master’s degrees at the elite Tsinghua University, studied under Prof. Trevor Darrell as a graduate student at UC Berkeley, and has worked at the National University of Singapore, Microsoft Research Asia, NEC Labs America, and Google Research.
In the final year of his PhD studies, Jia launched a side project to help him train and deploy deep learning models more efficiently. At the time, deep learning had started to gain traction, but there were still few toolboxes that offered off-the-shelf deployment for deep learning models.
In 2014, Jia published Caffe: Convolutional Architecture for Fast Feature Embedding. “Caffe” soon became one of the most popular deep learning frameworks, prized for vastly accelerating model training, and Jia open-sourced it to benefit the wider machine learning community. “Prior to Caffe, the deep learning field lacked a framework that fully open sources all the codes, algorithms, and details. Many researchers and doctoral students (like me) had to reproduce the same algorithm over and over again, which is not good. I feel that as a researcher, I shall have an open mind to help the development of the entire community,” Jia said.
Jia joined Google Brain in 2014, where he conducted state-of-the-art deep learning research and engineering. He co-authored the paper on the Inception architecture and the GoogLeNet model, which won the ImageNet competition in 2014. He also contributed to Google’s machine learning framework TensorFlow.
In 2017, Jia improved on his earlier creation with Caffe2, one of the first deep learning frameworks to provide high-performance AI capability across all major platforms.
Most recently, Jia has led Facebook’s Caffe2 and ONNX efforts. ONNX defines industry-wide open standards for exchanging neural network models, improving collaboration among major players such as Facebook, Microsoft, Amazon, Qualcomm, ARM, and many more.
Journalist: Tony Peng | Editor: Michael Sarazen