The iHelp consortium is conducting an ambitious, far-reaching study that uses several interconnected approaches to improve risk detection and prevention of pancreatic cancer through predictive interventions, with the aim of influencing both practice and policy. The intention is that the results of this study can subsequently be applied to the prevention of other health conditions that are similarly difficult to detect and diagnose.
Using AI with Big Data in healthcare presents both opportunities and challenges. AI has the potential to bridge Big Data and clinical applications, but it needs evaluation and review from the perspectives of medical ethics, public health, and other social science disciplines. Beyond the architectural complexity and the difficulty of integrating data from a myriad of sources, it is crucial that any application of AI to Big Data in a clinical setting also consider the patient. This includes ethical considerations as well as whether and how the data are reliable, accurate, and genuinely useful for improving patient outcomes. This is particularly true when analyzing data from social media, where our project must be sensitive to social, cultural, age-related, and other factors that affect access to social media and thus representation in these data.
In November 2021, the 193 Member States at UNESCO’s General Conference adopted the Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject. Its aim is not only to protect but also to promote human rights and human dignity, serving as an ethical compass and a global normative bedrock for building strong respect for the rule of law in the digital world. The binding EU AI Act is now close to becoming a reality; it targets specific applications of artificial intelligence rather than general systems. In Virginia Dignum’s optimistic vision of the future, artificial intelligence makes everyone’s lives better; to realize such a vision, we need a better understanding of the risks. AI is not alone in wavering between ethical and commercial ambitions, and questions of AI and ethics must be taken seriously. The iHelp consortium is aware of these important ethical considerations and works with them continuously. Our Ethics Board, consisting of three experts from external organizations, works in line with fundamental standards (those of the European Standards Organisations, ESOs) and other requirements that ensure the safety of people and businesses, and it addresses the risks we must manage to avoid undesirable outcomes.
This is not a choice between ethical, reliable AI and innovation: we can have both.
Regulation is necessary so that we do not waste time on misguided sidetracks. A viable way forward may be a kind of driver’s license for AI services, reducing the risks while we jointly develop a plan for their future use.