About me
Qi She was born in Yangzhou (扬州), Jiangsu (江苏) province. He now leads the Applied Algorithm (应用算法) department in Business Integrity at Bytedance, focusing on applying multi-modal Large Language Models, machine learning, and computer vision techniques to Bytedance AI products. He was a Research Scientist at Intel Labs from 2018 to 2020, studying statistical machine learning and deep learning with applications in computer vision and robotics.
Education
He obtained his Ph.D. in machine learning and neural computation from the Department of Electronic Engineering (now Electrical Engineering) at the City University of Hong Kong, where he was advised by Prof. Rosa H.M. Chan. During this period, he won second place in the 10th Global Artificial Intelligence Hackathon funded by IBM Watson Research. He also collaborated closely with Prof. Guanrong Chen and Prof. James Kwok, on complex networks and machine learning respectively. He was a fully funded Visiting Student Research Collaborator (VSRC) at Princeton University, advised by Prof. Jonathan Pillow, studying latent subspace discovery from high-dimensional neural responses.
Research
Qi has more than 50 peer-reviewed publications in venues including ICML, UAI, CVPR, ICCV, ICLR, AAAI, ICRA, BMVC, TPAMI, TNNLS, Artificial Intelligence, TSP, and CSUR. He organized the IROS 2019 Lifelong Robotic Vision Challenge and the CVPR 2020 & 2021 Continual Learning in Computer Vision Workshops, served as a PC member for ICONIP 2019, and reviews for prestigious conferences and journals including NeurIPS, ICML, ICLR, CVPR, AAAI, IJCAI, ICONIP, TSP, CSUR, and EJN. He holds 5 granted/filed US patents and 20 China patents. Much of this work concerns developing a continual learning framework/toolkit for quickly prototyping continual, few-shot, and meta-learning applications.
Specifically, his research interests include the following topics:
Multi Modal Large Language Model & LLM-based Agent
- Focusing on the multi-modal capabilities of large vision and language models, and exploring the behavior and personality of LLM-based agents and the insights they offer in the current era.
Advanced Learning Method
- Continual Learning, Transfer Learning, Multi-task Learning, Meta-learning, Curriculum Learning, and Self-paced Learning. With the explicit goal of improving data efficiency, he has been working on multiple problems formulated around training with multiple tasks or efficient sampling.
Dynamical Systems & RNNs
- He was also interested in modeling time series, ranging from dynamical systems to deep RNNs. This includes altering the optimizer and adding regularizers to structure the hidden states and dependencies of dynamic/recurrent models.
Complex Network & Graph Neural Networks
- Adding meaningful structure to neural networks is an important future direction that needs to be better understood. He has studied the impact of graph-structured neural networks and how to apply neural models to graph-structured data.
Theory for Representation Learning
- He is interested in understanding how neural networks work by utilizing deep generative models such as VAEs, GANs, and normalizing flow models.
Statistical Machine Learning
- Qi was fascinated by Bayesian theory and used it to extract hidden structure from high-dimensional neural data and to infer brain connectivity. Studying how information is encoded, decoded, and processed in the brain remains one of his interests.