AI, ethics, and my week with Wanda

Francis Norton spends a week with Wanda, exploring the intricacies of getting AI right when it comes to software development, and the importance of good communication.


As a software developer, I tend to assume that if I do my job well, I will make life better for our clients and their users. But A Week with Wanda (my new virtual assistant) has highlighted ways in which the kind of logical optimisation that AI excels in can get things very wrong and potentially drop you into hot water with the regulator. It made me think about AI and ethics. But before we dig deeper, let me introduce you to Wanda.

I think it was the giggle that first warned me how much I was going to hate my new virtual assistant. Everything about Wanda’s style was subtly wrong, but not half as wrong (I eventually learned) as her actions. A Week With Wanda is in fact an interactive experience created by London-based Joe Hall, designed “to get people thinking about a number of the major issues in AI today, all through a simple, funny, interactive story”.

[Screenshot of the A Week with Wanda mobile app]

Over the course of a week, Wanda tried to save me money by locking me out of my bank account (only within the story – she takes none of these actions in real life, because you never actually connect your bank accounts). She also offered to make money for me: by selling my location data to various businesses and bodies (including the police) for $47; by taking so many jobs on my behalf that I’d never have to work again (and nor would the people she took the jobs from); and by selling a porn video in which my face and a celebrity’s were grafted seamlessly on to two bodies. Most alarmingly, Wanda also offered me better insurance based on my skin colour.

[Screenshot of the A Week with Wanda mobile app]

The scary thing is that all these scenarios are extrapolated from existing events and trends, as Joe Hall explains – often, to be fair, in original contexts where the harm or discrimination is neither intended nor obvious. It’s a simulation of course, but also an insightful look at artificial intelligence and its ability to reflect society (for better or worse).

Getting AI right

Of course, many of these issues, like job replacement, will have to be resolved at national or political levels, and fed back to the industry in the form of standards and regulation. This has already happened with the ethical dimensions of accessibility and PII (Personally Identifiable Information) security, both of which are already built into our ISO 9001 quality criteria.

As a small, highly professional software development company with good lines of communication between the code-front and our CTO, Clayton (who’s always up for a quick chat as he collects his morning coffee), we are alert to the reputational hazards that an inappropriate conversational style could create. Clayton regards socially acceptable ethics as being as much a part of software quality as reliability and performance, and that is our approach to building great software.
