About me
Qi She was born in Yangzhou (扬州), Jiangsu (江苏) province. He currently leads the Applied Algorithm (应用算法) department in Business Integrity at ByteDance, focusing on deploying multi-modal Large Language Models, machine learning, and computer vision techniques in ByteDance AI products.
He previously worked as a Research Scientist at Intel Labs (2018-2020), studying statistical machine learning and deep learning with applications in computer vision and robotics.
Education
- Ph.D. in Machine Learning and Neural Computation
- City University of Hong Kong, Department of Electronic Engineering
- Advisor: Prof. Rosa H.M. Chan
- Achievement: 2nd place in the 10th Global Artificial Intelligence Hackathon (IBM Watson Research).
- Collaboration: Worked with Prof. Guanrong Chen (Complex Networks) and Prof. James Kwok (Machine Learning).
- Visiting Student Research Collaborator (VSRC)
- Princeton University
- Advisor: Prof. Jonathan Pillow
- Focus: Latent subspace discovery from high-dimensional neural responses.
Research Overview
Qi has published more than 50 peer-reviewed papers in top-tier conferences and journals, including ICML, UAI, CVPR, ICCV, ICLR, AAAI, ICRA, BMVC, TPAMI, TNNLS, Artificial Intelligence, TSP, CSUR, etc.
- Service:
- Organizer: IROS 2019 Lifelong Robotic Vision Challenge
- Organizer: CVPR 2020 & 2021 Continual Learning in Computer Vision Workshop
- PC Member: ICONIP 2019
- Reviewer: NeurIPS, ICML, ICLR, CVPR, AAAI, IJCAI, ICONIP, TSP, CSUR, EJN, etc.
- Patents: 5 US patents (granted or filed) and 20 Chinese patents.
Research Interests
Multi-Modal Large Language Model & LLM-based Agent
Focusing on the multi-modal capabilities of large vision-language models, the behavior and personality of LLM-based agents, and their applications.
Advanced Learning Methods
Continual Learning, Transfer Learning, Multi-task Learning, Meta-learning, Curriculum Learning, and Self-paced Learning. Aiming to improve data efficiency by formulating problems around multi-task training or efficient sampling.
Dynamical Systems & RNNs
Modeling time series with approaches ranging from dynamical systems to deep RNNs, and optimizing the structures and dependencies of dynamic and recurrent models.
Complex Network & Graph Neural Networks
Investigating how graph structure shapes the behavior of neural networks, and applying neural models to graph-structured data.
Theory for Representation Learning
Understanding neural networks via deep generative models such as VAEs, GANs, and normalizing flows.
Statistical Machine Learning
Applying Bayesian theory to extract hidden structures from high-dimensional neural data and infer brain connectivity.