AI in Biology: Risk, Ethics and Interpretability | Phytl Signs

Written by Nigel Wallbridge | May 1, 2019 1:13:48 PM

News of the recent death of Sydney Brenner, a renowned molecular biologist and enthusiastic conservationist, led the team at Vivent into a long conversation on ethics and science.

At Vivent, the ethical implications of our work are important to us, even if we can’t always draw firm conclusions about the best approach. Our recent discussions centered on the ethics of acting on predictions from artificial intelligence (AI) models.

If we trained a machine learning system on all the anaesthetics we know, and testing showed it had excellent predictive capability, we could then use the system to find new anaesthetics. We could do the same thing to find new drugs. At some point these new chemicals would have to be tested on animals and then on human beings. What is the ethical position on testing chemicals which have been selected by machines using reasoning we don’t understand?
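To make the scenario concrete, here is a minimal sketch of what such a screening system might look like. It assumes each candidate chemical is represented as a fixed-length vector of molecular descriptors with a label (anaesthetic or not); the data below is entirely synthetic and illustrative, standing in for the curated chemistry datasets real work would require.

```python
# Illustrative only: synthetic "descriptors" stand in for real molecular data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))            # hypothetical descriptor vectors
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # stand-in anaesthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Screen unseen candidates: high scores flag molecules worth testing,
# but the model offers no human-readable reason for its choices.
candidates = rng.normal(size=(5, 16))
print("predicted probabilities:", model.predict_proba(candidates)[:, 1])
```

The last line is exactly where the ethical question bites: the system ranks candidates for real-world testing while its internal reasoning stays opaque.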

Some members of the AI community, as well as medical practitioners, baulk at using answers from a machine that are not ‘interpretable’. In other words, they require that the reasoning behind the AI be investigated before high-risk procedures are undertaken.
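One common, if partial, answer to that demand is to ask which inputs the model leans on. Continuing the illustrative example above (same synthetic data and model, rebuilt here so the sketch stands alone), permutation importance shuffles each descriptor and measures the resulting drop in accuracy:

```python
# Permutation importance: rank inputs by how much shuffling each one hurts
# accuracy. Same synthetic setup as the earlier sketch, illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"descriptor {i}: mean importance {result.importances_mean[i]:.3f}")
```

This kind of ranking says which inputs matter overall, not why the model favoured a particular molecule, which is why many practitioners still hesitate to call such models interpretable.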

In medicine, there is often some time to weigh these issues. In a self-driving car, the AI must make critical decisions and act on them within milliseconds. But if you are critically ill and the AI suggests an untested drug, what are the ethics of denying that treatment because its interpretability is low?

Perhaps Sydney Brenner was onto something: “Progress in science depends on new techniques, new discoveries and new ideas, probably in that order.”