So you're a believer in #AI #Ethics? We all are, right? Then how come there are so many Twitter arguments about it? The answer (in a Myers/Briggs* style) is that we have different attitudes to certain key questions. Thread. 1/15
Our first disagreement is around People and process (P) vs Technology (T). Do you focus on using frameworks, or on better tech to resolve the issue? 2/15
There are great frameworks out there: from @AdaLovelaceInst (turing.ac.uk/research/data-…) and @Floridi, from work we've done at @DataKindUK (datakind.org/blog/doing-dat…) with @christinelhenry, from the EU (ec.europa.eu/digital-single…), and more. 3/15
But there are also those who believe in a more technical approach: fatml.org for example. 4/15
The second split is around those who would Stop abuses (S) and those who would Grasp opportunities (G).

I was thinking of calling them glass-half-empty and glass-half-full. Both are important, and they require careful balancing. 5/15
The third axis arises from our reliance on data: Removing bias (R) or Understanding bias (U). 6/15
Removers want to curate data (arxiv.org/pdf/1412.3756v…): take the biases out of the data and we avoid biases in the algorithms.

Understanders argue that we just can't do that, because of multicollinearity/intersectionality, and because it reduces accuracy (which comes with a cost). 7/15
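
To see the Understanders' point concretely, here is a minimal sketch on made-up data (the `group`, `proxy` and `label` variables are hypothetical, not taken from any cited paper): even after the protected attribute is removed, a correlated proxy carries the bias straight back into the model's predictions.

```python
# Hypothetical sketch (not from the thread): drop the protected attribute,
# keep a correlated proxy, and the bias survives the "curation".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # made-up protected attribute
proxy = group + rng.normal(0, 0.5, n)            # feature correlated with the group
label = (group + rng.normal(0, 0.5, n)) > 0.5    # outcome historically tied to the group

# "Curated" training data: the protected attribute is removed, the proxy stays.
X = proxy.reshape(-1, 1)
preds = LogisticRegression().fit(X, label).predict(X)

# Positive-prediction rate per group still diverges despite the removal.
for g in (0, 1):
    print(f"group {g}: {preds[group == g].mean():.2f}")
```

On this toy data the per-group positive rates still differ markedly, which is the multicollinearity point in miniature.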
My final tension is probably the biggest: those who insist we need to eXplain (X) algorithmic decisions, and those who think we should Analyse (A) their outputs. 8/15
This is where mud gets thrown around: the irreducible complexity of neural nets (arxiv.org/abs/1608.08225), human agency, and so on.

I suspect it also reignites the debate between computer scientists and social scientists, between those who believe in the uniqueness of humanity and those who don't. 9/15
But my thesis is that none of these positions are as irreconcilable as we think. We need to bridge these gaps. 10/15
The Explainers need to accept that we can't always understand exactly how decisions are made. And the Analysers need to accept that exploring (to the extent possible) algorithmic decision spaces is a good thing. 11/15
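
For the Analyser side of that bargain, here is a minimal sketch with a hypothetical stand-in model (`black_box` and `probe` are my own names, not anyone's API): you can explore a decision space from the outside by sweeping one input and watching the output, without ever opening the box.

```python
# Hypothetical sketch (my illustration, not the thread's method): treat the model as a
# black box and probe its decision space by sweeping one feature around a baseline input.
import numpy as np

def probe(score_fn, baseline, feature_idx, values):
    """Score the model as one feature varies, holding everything else fixed."""
    outputs = []
    for v in values:
        x = baseline.copy()
        x[feature_idx] = v
        outputs.append(score_fn(x))
    return np.array(outputs)

# Stand-in scorer: in practice this would be the deployed model's predict function.
def black_box(x):
    return 1.0 / (1.0 + np.exp(-(2.0 * x[0] - x[1])))

baseline = np.array([0.0, 1.0])
sweep = probe(black_box, baseline, feature_idx=0, values=np.linspace(-3, 3, 7))
print(np.round(sweep, 2))  # how the score responds to feature 0, with no access to internals
```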
Let's accept that we're all working towards a common goal, with the best of intentions and appropriate concerns. 12/15
And that's why I've put together this shonky M/B alternative. When talking to me about AI Ethics, you can see that my starting point is PGUA. You may be PSUX. Neither of us is right or wrong... 13/15
And with that, enjoy the rest of 2018, explore and make the world better in 2019. 14/15
Note: I haven't checked that there aren't any inappropriate combinations of these letters. Also, Myers/Briggs is a terrible, terrible thing. 15/15