
Dr Qiyuan Zhang

(he/him)

Lecturer in Human Factors

Overview

I'm a human factors researcher whose interests centre on human-machine systems, with a key focus on human-robot and human-automation interactions. I utilise theories and findings from the fields of Cognitive and Social Psychology to tackle real-world problems concerning human interactions with AI, autonomous systems and smart agents, particularly in safety-critical contexts such as transportation, emergency services and cyber security. I'm also interested in the detrimental effects of noise on human performance in cognitive tasks.

Publications


Research

Research Topics

Human Factors Psychology: Human-machine/robot interactions, trust in automation, human-centred design, explainable AI (XAI), auditory communication & sonification, interruptions & multitasking, cognitive workload, human error

Judgment & Decision Making: intuitive judgment, heuristics, cognitive biases, risk perception

Research Group

Human Factors Excellence Research Group (HuFEx)

Centre for Artificial Intelligence, Robotics and Human-Machine Systems (IROHMS)

Research Projects and Grants

2021    Human Vulnerability to Cyber Attack Attempts When Using Autonomous Vehicles – EPSRC Doctoral Training Hub in Cyber Security Analytics – Co-Applicant & Secondary Supervisor (~£90k)

2021    Autonomous shared transport: The role of social context in user perceptions of security and trust using immersive dynamic simulations – EPSRC Interdisciplinary Doctoral Training Hub in Sustainable Transport – Co-Applicant & Secondary Supervisor (~£90k)

2020–present    Rule of Law in the Age of AI: Distributive Principles of Legal Liability for Multi-Agent Societies – funded by ESRC-JST (~£800k) for three years – Lead Research Associate and later Co-Investigator

2020    Rapid Internal Simulation of Knowledge (R.I.S.K.) – funded by IROHMS Accelerator Grant (£5k) – Co-Investigator – developing a visualisation tool

2020    XAI & I – funded by IROHMS Accelerator Grant (£12k) – Co-Investigator – exploring whether human category learning can be facilitated by machine-learning algorithms

2019    Explainable AI (XAI) – funded by Airbus (£75k) – Research Associate

2018–2019    Human-Machine Interface in Emergency Operation Centres – funded by SOS Alarm in Sweden (£30k) – Lead Research Assistant – improving procedural protocols and the human-computer interfaces (HCIs) used in more than 20 emergency call centres in Sweden

2018–2019    Flourish Connected Autonomous Vehicles: Empowerment through trusted, secure mobility – funded by Innovate UK (£5.6M) – Research Assistant – developing autonomous transportation and ecosystems for the mobility of older adults and people with disabilities

2012    Centre of Excellence for Stimulation of Regional Economy of Northeast – funded by the Rural Development Programme for England (RDPE) (£140k) – Main Applicant/Account Holder – a business project to build state-of-the-art equine facilities in Northeast England

2012    Counterfactual-based Persuasive Messages in Risk Communications – funded by the Institute of Hazard, Risk and Resilience (IHRR) at Durham University (£1.5k) – Principal Investigator – investigating the effectiveness of near-misses versus real accidents in raising people's awareness of hazardous situations

2008    Uncertainties in Counterfactuals – Durham Academic Scholarship – PhD Studentship

Biography

I completed my PhD in Cognitive Psychology at Durham University, investigating people's intuitive judgments of probability and risk through counterfactual thinking (i.e., imagining what might have been). I then spent five years working as an R&D manager/data analyst in the sport industry, helping dressage and showjumping riders improve their skills and communication with their horses using data-driven methods with the assistance of a riding computer simulator.

I joined Cardiff University in 2018 and re-launched my research career in the applied areas of human-machine interaction (HMI) and automation. As a member of the Human Factors Excellence Research Group (HuFEx) and the Centre for Artificial Intelligence, Robotics and Human-Machine Systems (IROHMS), I have collaborated internationally with scholars and industrial organisations on multidisciplinary projects concerning human performance and wellbeing when interacting with AI-powered technologies across application domains including intelligent transportation, emergency services and aviation. One example, funded by the Swedish emergency services company SOS Alarm, aimed to improve human-computer interface design and working procedures in their emergency call centres across Sweden. Another project, funded by Airbus, focused on Explainable AI (XAI) and involved developing evaluation frameworks to enhance the interpretability of deep learning networks for human users. I also contributed to the Flourish connected autonomous vehicle (CAV) project (£5.6M, funded by Innovate UK), in which I worked with a team of multidisciplinary academics (Human Factors psychologists, engineers, computer scientists), industrial partners (e.g., Airbus) and charities (e.g., AgeUK) to develop and test human-machine interfaces for future autonomous vehicles designed for older adults and people with cognitive/physical impairments.

The Flourish project sparked my research interest in autonomous driving, as it is a great "test-bed" for deploying AI-powered technology at scale in safety-critical domains. I took the position of lead Research Associate, and later Co-Investigator, on an ESRC-JST funded UK-Japan joint research project entitled "Rule of Law in the Age of AI: Distributive Principles of Legal Liability for Multi-Agent Societies", in which Human Factors experts from Cardiff University (led by Prof Phil Morgan as UK PI) teamed up with legal and robotics experts from Kyoto, Osaka and Doshisha Universities in Japan (led by Prof Tatsuhiko Inatani as Japan PI) to address one of the biggest challenges facing the proliferation of autonomous vehicles and other AI-powered autonomous systems: the distribution of blame and liability in the event of accidents. The Cardiff team led the investigation of judgments of blame and trust, constructing new experimental paradigms and collecting data on various types of accident scenario using text-based vignettes, animations and high-fidelity computer simulations on the cutting-edge Transport Simulator in the IROHMS Simulation Laboratory. Our findings have important implications for policy making, legislation and the design of autonomous vehicles.

My future research will continue to focus on people's relationships with AI, robots, automation and other smart agents in social and working environments. I will address questions such as: How do humans perceive these smart agents in terms of intelligence and emotions? Would humans apply theory of mind to them and develop empathy toward them? How would interacting with these agents affect humans' sense of self-identity (e.g., what makes a human human)? How would the design of smart agents influence people's perceptions of them? I believe the answers to these questions are crucial to building a sustainable modern society that benefits from AI-powered technologies.


Contact Details