Welcome

I am an Assistant Professor in the School of Computing at Utah State University. My research is rooted in responsible AI, with a focus on building machine learning systems that are effective, explainable, fair, and robust in high-stakes settings. Much of this work centers on trustworthy anomaly detection, where the goal is not only to identify unusual behavior accurately, but also to make AI decisions more transparent, reliable, and ethically deployable.

A major domain of my research is AI for cybersecurity, particularly the detection of insider threats and other malicious behaviors in cyberspace. By developing advanced machine learning models, this work addresses practical challenges in real-world cyber defense, such as limited labels, noisy data, class imbalance, and adaptive adversaries, with the goal of making AI-based threat detection more practical and dependable.

More recently, I have been exploring AI security, with an emphasis on vulnerabilities such as backdoor attacks and adversarial attacks in anomaly detection models and language models, as well as defense mechanisms for improving model robustness. In this way, my research looks at both sides of the problem, using AI to strengthen security and making AI systems themselves more secure.

Another focus is the use of AI for IT operations, especially the analysis of system log data for fault detection, diagnosis, and mitigation. This line of work includes models such as LogBERT and LogGPT, along with methods for cross-system anomaly detection, root-cause analysis, and anomaly mitigation. Together, these efforts aim to make AI systems more useful and interpretable for maintaining complex computing infrastructure.

I received my PhD in Computer Science from Tongji University in 2017 and then worked as a postdoctoral researcher at the University of Arkansas from 2018 to 2019.