A guest post by Scott Fischaber from Analytics Engines
Analytics Engines is supporting MIDAS in its aim to map, acquire, manage, model, process, and exploit existing heterogeneous healthcare data and other governmental data. Doing this successfully requires partners to explore analytics, machine learning, and data mining technologies.
But how do we ensure the technology is being deployed ethically?
From the inception of the project, MIDAS partners have engaged in many conversations exploring the ethical use of data, as well as transparency and trust in the application of emerging technologies.
The legislative framework primarily addressing privacy arrived in the form of the GDPR in 2016. Article 22 of the regulation specifically addresses automated individual decision-making (including profiling), and this has implications for how data controllers use analytics, AI, and machine learning.
Several legal protections also govern the use of AI, including EU primary and secondary law, the UN human rights treaties, the Council of Europe conventions, and EU member state laws. During the lifetime of the project, we have seen legislation come forward in Finland and Spain on the secondary use of data in research.
But legal frameworks and privacy protection do not provide a complete answer to the ethics of transformative technologies.
Writing on this blog more than two years ago, Paul Carlin from the South Eastern Health and Social Care Trust felt that although legislation and regulation “set the tone, they will not necessarily fully define the ethical domain to which they pertain”.
For those wishing to use large volume data, he noted, “there must be thought given to the potential and actual ethical issues that could impact both the individual and society.”
The debate surrounding the ethical use of AI has long been held in a range of fora. For example, in January 2016 Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, noted in a blog about the Fourth Industrial Revolution that “the revolutions occurring in biotechnology and AI… will compel us to redefine our moral and ethical boundaries”.
At the latest Big Data Belfast conference, one of the biggest data analytics events in Ireland and run annually by Analytics Engines, Professor Mark Keane from University College Dublin spoke on the promises and pitfalls of big data and ethical AI. For Mark, there must be fair ways to convey decisions, preservation of privacy, acceptable uses for data, and permission and informed consent.
As mentioned above, within MIDAS we have been investigating acceptable uses for data, permission, and informed consent, and we have also been looking at privacy-preserving analytics using federated learning (sketched below). His presentation was thought-provoking on many levels, but as we near the end of the MIDAS Project and look to the future, I found his material on Explainable AI (XAI) particularly engaging (Mark Keane has co-authored an excellent paper on a twin-system approach to XAI).
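For readers unfamiliar with federated learning, here is a minimal sketch of the federated-averaging idea: each site trains a model on its own data locally, and only model weights, never raw records, are shared with a coordinating server. The logistic-regression model, the synthetic site data, and the parameter choices below are illustrative assumptions, not the MIDAS implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local step: logistic-regression gradient descent
    on data that never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Server step: average site models, weighted by sample count."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Illustrative rounds: three hypothetical hospital sites, one shared
# model, no raw data ever pooled centrally.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]
global_w = np.zeros(5)
for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])
```

The key point for privacy is that only the weight vectors cross site boundaries; in practice this is often combined with further safeguards such as secure aggregation or differential privacy.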
An independent 52-member High-Level Expert Group set up by the European Commission stated in its ‘Ethics Guidelines for Trustworthy AI’ that: “For a system to be trustworthy, we must be able to understand why it behaved a certain way and why it provided a given interpretation. A whole field of research, Explainable AI (XAI) tries to address this issue to understand the system’s underlying mechanisms better and find solutions.”
There has been much progress within this space, both at a research level and commercially. Indeed, Google launched its Explainable AI toolset in November 2019.
Where systems and technologies are more robust and reliable, and the decision-making process is made more transparent through explanation, inroads can be made towards building trust and confidence among users. XAI is one of several new techniques that could be investigated to support our overall aims as we move beyond the end of the MIDAS Project and look to the future.
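To make the explanation idea concrete, here is a minimal sketch in the spirit of the twin-system approach mentioned above: an opaque “performance” model is paired with a nearest-neighbour “explanation” twin that justifies each prediction by retrieving similar past cases. The published approach additionally weights features by the network’s learned contributions before retrieval; this simplified sketch, with illustrative data and models, skips that step and is not the paper’s implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

# Opaque "performance" model: it predicts, but does not explain.
X_train = np.random.default_rng(1).normal(size=(200, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_train, y_train)

# Twin "explanation" model: a nearest-neighbour index over the same
# training data, used to justify each prediction by example.
nn = NearestNeighbors(n_neighbors=3).fit(X_train)

def predict_with_explanation(x):
    """Return the opaque model's prediction plus the training cases
    most similar to x, as an explanation-by-example."""
    pred = model.predict(x.reshape(1, -1))[0]
    _, idx = nn.kneighbors(x.reshape(1, -1))
    return pred, [(X_train[i], y_train[i]) for i in idx[0]]

pred, cases = predict_with_explanation(np.array([0.5, 0.5, 0.0, 0.0]))
print(f"Predicted class {pred}, supported by {len(cases)} similar past cases")
```

Explanation-by-example is attractive in a healthcare setting because the retrieved cases can be inspected directly, which supports exactly the kind of transparency and trust the expert group’s guidelines call for.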