It’s vital that AI and machine learning develop into constructive, helpful members of society. LearnerShape’s mission – to help learners find useful courses and learning pathways – means we bear a particular responsibility for getting this right. The tools we use need to provide fair recommendations, unbiased suggestions, and skill representations that don’t leave any learners behind.
The problems posed by unethical AI are clear. Systems benefit a privileged few while disproportionately under-serving marginalised communities. Algorithms that learn from historical human decisions will reproduce the biased and prejudicial outcomes encoded in that history. For example, facial analysis technologies used in recruitment have denied employment opportunities to people with darker skin and to people with disabilities. Similarly, employee evaluation systems used for performance, upskilling, and promotion decisions tend to undervalue diversity.
While the issues AI can cause are well known, researchers and the tech community do not yet have a commonly agreed framework for measuring and addressing bias. To explore this problem space, LearnerShape is working closely with the Institute for Ethical Artificial Intelligence (IEAI) at Oxford Brookes University. The IEAI draws on a broad range of expertise from across the university, including technology, science, business, law and the social sciences. It offers hands-on, practical advice and support to organisations seeking to maximise the benefits of AI and data analysis for themselves and their customers.
LearnerShape faces particular challenges that it is important to get right. For example, are we disadvantaging beginner learners by recommending material that is too advanced? Where several learning pathways are equally appropriate, should every learner receive the same recommendations? Are we making any retraining or reskilling pathways unnecessarily hard?
This collaboration aims to design and test a pragmatic evaluation process that can be used to advise on the ethical impact of AI technology on end users. By working together, we hope to better understand and plan for the risks and opportunities that AI presents.
Dr Fintan Nagle, CTO, LearnerShape
Dr Selin Nugent, Institute for Ethical AI, Oxford Brookes University
Prof. Kevin Maynard, Institute for Ethical AI, Oxford Brookes University