
Raising good Robots

How I used my grandmother’s teachings to foster tomorrow’s AI

5 min read · Dec 6, 2018


KITT — “KARR doesn’t have my programming to protect human life.”
Michael Knight — “That’s what I’m counting on, buddy.”

I was 9 when I watched a rerun of this episode in the sleepy Zimbabwean city of Bulawayo, so distant from the future-forward notions it presented. Having only recently grasped the concept of good vs evil myself, I found it interesting how either side of this binary was equally a reflection of self and society. In the scene above, it is clear that both AIs have a predetermined cultural disposition, each value system suited to the purpose of its existence. Values imposed by man.

Fast forward to 2018 and we’re not too far from this ethics-imposition conundrum. For starters, it is a well-documented fact that Artificial Intelligence has a diversity problem. The effects range from sexist natural language processing and racially skewed facial recognition software to, more alarmingly, software used to predict future criminals that shows bias against black people.

What are the implications of this?

It’s hard to tell yet, but they are far-reaching.

A robot stands near luggage in Kyoto, Japan (image via Unsplash)

Google, one of the largest tech companies and proprietor of the most popular mobile phone operating system in the world, this year placed AI at the heart of its offering. What this means is that the potential force multiplication of bad AI is at an all-time high as tech companies continue to lead efforts to democratise the technology, making complex, sometimes compromised algorithms available to everyone.

Glancing at a slightly more dystopian future, ethically imbalanced AI presents huge problems for autonomous weapons, especially considering that computer vision methodologies struggle to identify people of color. Self-driving cars need a variation of the same technology to recognize humans and objects, and we will ultimately see the human vs machine vs human prioritization dilemma presented by the Knight Rider scene above.
Taking a darker turn, autonomous vehicles faced with collision parity will have microseconds within which to choose between prioritizing a spectrum of victims. Who do they choose, and why? More importantly, who makes the choice?

Why AI needs a value system

While this is of course easier said than done, Artificial Intelligence requires a culture imposition that goes beyond the preferences of a handful of data scientists and developers.
In a 2018 survey conducted by Stack Overflow, nearly 50% of the developers surveyed believed that the people creating AI should be responsible for considering the ramifications of the technology. The idea of developer ownership would be great, except that of the 60,000+ developers surveyed, 93% are male, 74% are white, and 93% identified as heterosexual. This leaves us with the bias problem we started with.
It follows that the values that inform AI decision-making should come from a neutral source that understands the value of all life.

Value systems have limitations.

Values imbued with societal principles are bound to carry a legacy of prejudices. And while company-led initiatives are a step in the right direction, the inevitability of profit outcomes will always loom over otherwise noble steps such as Google’s AI principles.

I’ve dabbled with Artificial Intelligence before and have always been a proponent of its use to advance mankind and improve our lives. I have no doubt that the singularity will be reached in my lifetime, and as such, I worry about what values the software living beside us will carry.
I began researching how a value system could be imposed on a robot. The first thing I discovered was that, for all the charm machine learning presents, ethics need to be a static concept. Values have to come from a source that is unchanging and should not be modified as a means of contextualization. So reinforcement learning, although tempting, would be ill-suited here.
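
To make “static” concrete: in code, such a value source would be a set of hand-authored constants that no training loop is allowed to update. Here is a minimal sketch in Node.js, purely illustrative; the names and weights are my assumptions, not taken from any real codebase:

```javascript
// values.js: a minimal sketch of a static value source.
// The weights below are hand-authored constants; nothing in the
// system is permitted to learn or overwrite them at runtime.
const UBUNTU_VALUES = Object.freeze({
  preserveLife: 1.0,      // life always outranks wealth or property
  protectVulnerable: 0.9, // elevate the most vulnerable first
  preserveProperty: 0.1,
});

module.exports = UBUNTU_VALUES;
```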

Along the way I also learnt that values should be scalable. A good value system should present ideals that are well-defined enough to be consumable, yet sufficiently abstract to be relevant to a spectrum of applications.

How I chose a value system

Ubuntu (Zulu pronunciation: [ùɓúntʼù]) is a Nguni Bantu term meaning “humanity”. It is often translated as “I am because we are,” and also “humanity towards others”, but is often used in a more philosophical sense to mean “the belief in a universal bond of sharing that connects all humanity.”
Growing up in Zimbabwe, one of the African countries whose societal values are underpinned by ubuntu, I was privileged to have been raised in a society that elevates all individuals so as to achieve collective happiness.
Archbishop Desmond Tutu describes ubuntu as meaning:

‘My humanity is caught up, is inextricably bound up, in what is yours’

and my grandmother’s advice on life and one’s place in a collective narrative was informed by this.

I chose ubuntu because of the importance it places on elevating the vulnerable, on the true worth of individuals, and on how that worth relies on their contribution to community. Ubuntu is a non-discriminatory value system whose principles are borderline abstract, making it a perfect foundation for raising good robots across a range of applications.

Building an ubuntu-led value system for self-driving cars

Our early-stage development ethics API

Working with the lead developer over at triple.black, I set out to build a minimum viable product (MVP) with autonomous vehicles and credit risk assessment as use cases. To do this we created data sets for possible outcomes that would require a parity split. We then parameterized ubuntu values, drawing on Stanlake Samkange’s 1980 ubuntu maxims, one of which declares:

“…if and when one is faced with a decisive choice between wealth and the preservation of the life of another human being, then one should opt for the preservation of life”.

We created a weighting system that assigned a score to each detected entity within a collision spectrum, prioritizing human life from most vulnerable to least (based on how each entity scored against a geographical median height-to-surface-area ratio and its predicted ability to withstand the force of a collision).
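
To illustrate the idea (not our production model), here is a simplified sketch of the weighting in Node.js. The field names and coefficients are hypothetical stand-ins; the real system used richer inputs:

```javascript
// A simplified, illustrative sketch of the entity-weighting idea.
// A higher vulnerability score means "protect this entity first".
function vulnerabilityScore(entity, medianRatio) {
  // Entities smaller than the regional median (e.g. children) score higher.
  const sizeFactor = medianRatio / Math.max(entity.heightToSurfaceRatio, 0.01);
  // A lower predicted ability to withstand a collision raises the score.
  const fragilityFactor = 1 - entity.collisionResilience; // resilience in [0, 1]
  const humanFactor = entity.isHuman ? 1 : 0.05; // human life dominates
  return humanFactor * (sizeFactor + fragilityFactor);
}

function prioritize(entities, medianRatio) {
  return [...entities].sort(
    (a, b) => vulnerabilityScore(b, medianRatio) - vulnerabilityScore(a, medianRatio)
  );
}

// Example: a child pedestrian outranks an adult, who outranks a parked car.
const ranked = prioritize(
  [
    { id: 'adult', isHuman: true, heightToSurfaceRatio: 1.0, collisionResilience: 0.5 },
    { id: 'child', isHuman: true, heightToSurfaceRatio: 0.6, collisionResilience: 0.2 },
    { id: 'car', isHuman: false, heightToSurfaceRatio: 0.3, collisionResilience: 0.9 },
  ],
  1.0
);
console.log(ranked.map((e) => e.id)); // ['child', 'adult', 'car']
```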

Our sandbox for credit risk assessment

We also created an endpoint for credit risk assessment instances where anonymized personal information is known. We then deployed the two endpoints on our Node.js API to a cloud platform and made our first successful API call on the 15th of June, 2018 🎉.
The service can also be easily built into native software, which is very important for AI that needs to make decisions within a fraction of a second.
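
For a sense of shape, a minimal version of the two routes might look like the sketch below. The route names and payload fields are assumptions for illustration, not the service’s documented contract, and it reuses the prioritize function from the earlier sketch:

```javascript
// server.js: an illustrative sketch of the two endpoints using Express.
const express = require('express');
const { prioritize } = require('./weighting'); // the earlier weighting sketch

const app = express();
app.use(express.json());

// Collision-parity endpoint: returns detected entities ranked by vulnerability.
app.post('/v1/collision/prioritize', (req, res) => {
  const { entities, medianRatio } = req.body;
  res.json({ ranked: prioritize(entities, medianRatio) });
});

// Credit-risk endpoint: scores an anonymized applicant against ubuntu-weighted rules.
app.post('/v1/credit/assess', (req, res) => {
  // ...scoring logic elided for brevity...
  res.json({ decision: 'review' });
});

app.listen(process.env.PORT || 3000);
```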

What’s next?

Currently, anyone can request access to our sandbox to input and edit data and test the API responses for each condition. We’re going to continue refining the weighting to make the ethics service production-ready.
Finally, check out the ethics library here!



Written by Babusi

Design Strategist | Innovator