The Society makes grants both to individuals and to organisations in support of cultural and scientific activities which increase innovation, outreach and diversity in Leeds and its immediate area. It also supports local museums and galleries and publications relating to the city.

About the Society

The Leeds Philosophical and Literary Society, founded in 1819, is a charity that promotes interest in science, literature and the arts – in the city of Leeds and beyond. We have meetings, lectures, entertainments, publications and visits.

Artificial Intelligence – Part 2

What is ethical AI? A landscape of possibilities and pitfalls

Gabriela Arriagada Bruneau, University of Leeds


The development of AI and its integration into society has created an intimate relation between us and data-based technologies. In this talk Gabriela gave an overview of the different areas of concern of the Ethics of AI as a discipline, including topics like Machine Ethics, Data Ethics, and Robot Human Interaction. She went on to present some of the current challenges of integrating ethics into these disruptive technologies.

Gabriela warned us that this is a huge, fast-developing, complex and inter-disciplinary area. She could only sketch the landscape of artificial intelligence (AI) challenges we have to think about now, and raise questions about what could come next.

First came a ‘map’ setting out the usual way of breaking down the topic. (See video for the slide showing the categories. The talk was illustrated throughout with clear diagrams). Straight away we were drawn into considering some fundamental questions, such as:

How close can a machine get to operating as a moral agent like a human? What if we manage to develop a sentient, rational artificial intelligence with its own needs, volition and intentions? Are we at risk from our own creations? What moral status will be given to machines? How will robots be integrated into society? They’re getting better at doing tasks that have hitherto always been done by humans. So, just as there have been debates about incorporating animals into the moral sphere, we’re now faced with considering how robots might be related to humans if they have capacities suggesting that they should be accorded rights and expected to bear responsibilities. This is a whole sub-field in itself.

Algorithms and models present many difficulties. There were clear links to Netta’s earlier talk: we’ve created the algorithms, but once we set them running, we can’t follow every step and understand exactly how they’re operating. How can we then judge the output if the process isn’t completely transparent?

There are also ethical issues to do with how people’s data is used. Inevitable tension arises because we need data to make AI work, and the more detailed and complete the sources, the greater the likelihood of effective AI. But privacy and security concerns loom large. How can we implement data-driven technologies in ways that uphold the ethical principles we value?

Ethical considerations go beyond asking ‘should we?’ If the answer is yes, then we need to make practical suggestions towards answering ‘how?’ Risks and types and levels of control have to be considered. There’s a wide spectrum of responsibilities. These are not new dilemmas – indeed, such dilemmas have been at the heart of philosophical enquiry for centuries – but they’re being played out in the new realm of AI. These are not just rarefied theoretical ethical discussions; there are real implications.


One such area has been much in the news over the last few years. We know that algorithms have been used to manipulate opinions and actions, and this affects our trust in people and systems. Gabriela drew attention to the phenomenon of ‘deep fakes’: creations or alterations of information and images deliberately intended to deceive. Algorithms lead us down some pathways and close off others. These ‘filter bubbles’ prevent us from seeing options side by side, argued logically and dispassionately. Other people’s choices help to channel our own choices and therefore alter our perceptions of what’s happening. Acquiring sound knowledge becomes harder when there’s overload and manipulated information. Even educated and critically alert people can be taken in; more impressionable people, including children, can be completely deceived and may even knowingly welcome being drawn down a channel. Product preferences are one thing; political and ethical ideas are more problematic.

This is a technology-driven version of what has always happened in human society but the scale and pace of the phenomenon is concerning. We think we’ve been empowered, but we’re being led rather than exercising choice. Truths are being inferred from false premises. Digital literacy principles are ever more necessary – an ‘anti-ignorance’ movement. There’s a difference between information readily accessed and the critical appraisal of different sources of knowledge to establish and refine proper understanding.


“Data do not speak for themselves. They need a context, they need a purpose, and that’s why design can be a game-changer”.

AI is often built on simplified, partial or biased data, uncritically accepted for lack of anything more comprehensive and consciously quality-controlled. Discrimination and inequality are being replicated in the systems being created and in the decisions and actions that follow. Technical feasibility isn’t enough; the process of building systems needs to be informed by the ethical standpoint of the real world into which AI is being deployed, and done with awareness that values and laws aren’t uniform across the world and throughout all societies.

Human-robot interactions are a special topic of concern. There are strong arguments in favour of providing robots for various kinds of intimate interactions with humans – for sex, for care. But there are also counter-arguments. There is the practical challenge of creating appropriate functions, but also the danger that successful robot functioning might leave vulnerable people liable to over-interpret the nature of the interaction. Might those reliant on robots become more uncomfortable with real interactions?

Robots with the capacity to decide to kill are a particular fear and the debate is far from resolved.

There are no solid, settled answers to these extremely complex matters.

Gabriela Arriagada Bruneau is a PhD candidate at the Inter-Disciplinary Ethics Applied Centre (IDEA) and the Leeds Institute for Data Analytics (LIDA) at the University of Leeds. She holds an MSc in Philosophy from the University of Edinburgh and is Director of Applied Ethics for the think tank “Thinking Network” (Pensar en red) in Chile. Her work is mostly focused on fairness, bias, explicability and interpretability in data science and AI. She is also interested in gender discrimination and feminist approaches to understanding these issues.

Graphic: head with computer-like motifs inside, linked to a silhouette head. Capgemini. Copyright © 2021.

You may also be interested in Part 1 of this mini-series – see the Past Events section for the video.
