2026
Peters, Dorian; Hollanek, Tomasz; Ahmadpour, Naseem; Calvo, Rafael A; Chivukula, Sai Shruthi; Dindler, Christian; Gray, Colin M; Lazem, Shaimaa; Öz, Gizem; Piet, Nadia
Ethics at the Front-End: Responsible User-Facing Design for AI Systems (Proceedings Article)
In: Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems (CHI EA '26), Association for Computing Machinery, 2026.
@inproceedings{Peters2026-akb,
title = {Ethics at the Front-End: Responsible User-Facing Design for AI Systems},
author = {Dorian Peters and Tomasz Hollanek and Naseem Ahmadpour and Rafael A Calvo and Sai Shruthi Chivukula and Christian Dindler and Colin M Gray and Shaimaa Lazem and Gizem Öz and Nadia Piet},
url = {http://dx.doi.org/10.1145/3772363.3778769},
doi = {10.1145/3772363.3778769},
year = {2026},
date = {2026-04-01},
urldate = {2026-04-01},
booktitle = {Extended Abstracts of the 2026 CHI Conference on Human Factors in
Computing Systems (CHI EA '26)},
publisher = {Association for Computing Machinery},
abstract = {AI ethics discourse typically centers on the design of algorithms
and back-end systems, but the design of what users experience at
the ‘front-end’ of AI systems also engages with values-laden
ethical decisions. Deceptive patterns, distorted data
visualization, and exclusionary interfaces represent unethical
practices within the purview of interface, interaction and user
experience design. In this workshop we seek to better understand
and articulate what we believe to be an under-valued area of AI
Ethics: front-end design. Through a combination of
cross-disciplinary and collaborative hands-on activities, we aim
to map a landscape of ethical front-end design for AI including
an initial round-up of critical issues for practice, policy
implications, and pressing areas for future research. The
workshop will host a keynote by human-centered AI trailblazer,
Ben Shneiderman, and follow-up will include a written synthesis of
outcomes for sharing with the HCI community.},
keywords = {Artificial Intelligence, Ethics and Values, Legal and Policy Perspectives, Regulation, UX Practice},
pubstate = {published},
tppubtype = {inproceedings}
}