AI, ethics, and my week with Wanda

Francis Norton spends a week with Wanda, exploring the intricacies of getting AI right when it comes to software development, and the importance of good communication.

Image by peshkov.

As a software developer, I tend to assume that if I do my job well, I will make life better for our clients and their users. But A Week with Wanda (my new virtual assistant) has highlighted ways in which the kind of logical optimisation that AI excels at can get things very wrong, and potentially land you in hot water with the regulator. It made me think about AI and ethics. But before we dig deeper, let me introduce you to Wanda.

I think it was the giggle that first warned me how much I was going to hate my new virtual assistant. Everything about Wanda’s style was subtly wrong, but not half as wrong (I eventually learned) as her actions. A Week With Wanda is in fact an interactive experience created by London-based Joe Hall, designed “to get people thinking about a number of the major issues in AI today, all through a simple, funny, interactive story”.

Screenshot of the A Week with Wanda mobile app.

Over the course of a week, Wanda tried to save me money by locking me out of my bank account (only within the story – she takes none of these actions in real life, because you never actually connect your bank accounts). She also offered to make money for me: by selling my location data to various businesses and bodies (including the police) for $47; by taking so many jobs on my behalf that I’d never have to work again (nor would the people she took the jobs from); and by selling a porn video in which my face and a celebrity’s were grafted seamlessly onto the two bodies. Most alarmingly, Wanda also offered me better insurance based on my skin colour.

Screenshot of the A Week with Wanda mobile app.

The scary thing is that all of these scenarios are extrapolated from existing events and trends, as Joe Hall explains – often, to be fair, from contexts where the harm or discrimination was neither intended nor obvious. It’s a simulation, of course, but also an insightful look at artificial intelligence and its ability to reflect society, for better or worse.

Getting AI right

Of course, many of these issues, such as job replacement, will have to be resolved at a national or political level, and fed back to the industry in the form of standards and regulation. This has already happened with the ethical dimensions of accessibility and PII (Personally Identifiable Information) security, both of which are built into our ISO 9001 quality criteria.

As a small, highly professional software development company with good lines of communication between the code-front and our CTO, Clayton (who’s always up for a quick chat as he collects his morning coffee), we are alert to the reputational hazards that something as simple as an inappropriate conversational style could cause. Clayton regards socially acceptable ethics as being as much a part of software quality as reliability and performance, and that is our approach to building great software.
