XAIOmics: Explainable Artificial Intelligence in Life Science: An Application to Omics Data
- Project Group:
Ali Sunyaev, Scott Thiebes, Philipp Toussaint
German Cancer Research Center (DKFZ)
As it becomes progressively challenging to fully analyse the ever-increasing amounts of generated biomedical data (e.g., CT scans, X-ray images, omics data) by means of conventional analysis techniques, researchers and practitioners are turning to artificial intelligence (AI) approaches (e.g., deep learning) to analyse their data. Although the application of AI to biomedical data in many cases promises improved performance and accuracy, extant AI approaches often suffer from opacity. Their sub-symbolic representation of state is often inaccessible and non-transparent to humans, limiting our ability to fully understand, and therefore trust, the produced outputs. Explainable AI (XAI) describes a recent trend in AI research that aims to address the opacity issue of contemporary AI approaches by producing (more) interpretable AI models whilst maintaining high levels of performance and accuracy. The objective of the XAIOmics research project is to design, develop, and evaluate an XAI approach to biomedical (i.e., omics) data. In particular, we will identify biomedical use cases and current, viable approaches in the domain of XAI, and apply and adapt these approaches to the identified use cases. Given the highly interdisciplinary nature of the field, a central research hurdle will be developing an understanding of the different kinds of biomedical data and of the subsequent feature engineering required in designing the AI algorithms. In doing so, this project will not only aid researchers and physicians in better understanding the outputs of contemporary AI approaches for biomedical data but also create more transparency, thereby supporting the building of trust in AI-based treatment and diagnosis decisions in personalized medicine.
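To make the idea of explaining an otherwise opaque model concrete, the following is a minimal, hypothetical sketch (not the project's actual method) of one common model-agnostic XAI technique: permutation feature importance applied to synthetic "omics-like" data. All names and data here are illustrative assumptions; the project itself will select and adapt XAI approaches to real biomedical use cases.

```python
# Illustrative sketch: permutation feature importance as a simple
# model-agnostic XAI technique on a synthetic gene-expression matrix.
# The data, feature names, and model choice are all assumptions for
# demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic expression matrix: 200 samples x 50 "genes";
# only the first three genes carry signal for the binary label.
X = rng.normal(size=(200, 50))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

# Fit a simple classifier (stands in for a more opaque model).
model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance: measure the drop in accuracy when each
# feature's values are shuffled, breaking its link to the label.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

# Report the five most important "genes".
top = np.argsort(result.importances_mean)[::-1][:5]
for i in top:
    print(f"gene_{i}: {result.importances_mean[i]:.3f}")
```

In this sketch, the three signal-carrying genes should surface at the top of the ranking, giving a researcher a human-readable account of which features drive the model's predictions, which is the kind of transparency the project aims to deliver for real omics data.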