I am a dedicated AI researcher passionate about building trustworthy AI systems that make the future better. I am currently pursuing my PhD (2023–2027, expected) under the joint supervision of Prof. Elliott Ash and Prof. Mrinmaya Sachan at ETH Zurich and Prof. Markus Leippold at the University of Zurich, dividing my time equally between the two institutions.

My research centers on AI application trustworthiness, with a focus on two complementary directions:

  • External Monitoring – Developing efficient external verifiers to validate AI reasoning and outputs, and improving AI application design (e.g., output formats, user interfaces, and visualizations) to facilitate human and machine oversight.
  • Internal or Self-Monitoring – Enabling AI systems to recognize when they may be correct or mistaken. This includes fine-tuning models to acknowledge their knowledge limits and express uncertainty, as well as designing uncertainty quantification methods that capture LLM uncertainty without interrupting generation.

By advancing these directions, I aim to strengthen the reliability of AI in high-impact areas such as NLP for climate change and Legal NLP. I like to aim high while keeping my feet firmly on the ground.

If you are interested in collaborating, feel free to reach out!

Jingwei Ni

PhD Student in NLP

Work Email: jingni@ethz.ch

Personal Email: jingweini8@gmail.com

ETH Zürich
IFW-E-44, Haldeneggsteig 4

Projects and Community Services

Publications

Education

PhD @ ETH D-GESS

ETH & UZH (Mar 2023 – Present)

MSc in Data Science and Machine Learning

University College London (Sep 2021 – Sep 2022)

BSc in Computer Science

University of Hong Kong (Sep 2017 – Sep 2021)
