A database for publications published by researchers and students at SimulaMet.
Talks, invited
Soccer Athlete Performance Prediction using Time Series Analysis
In NORA Annual Conference, 2022. Status: Accepted
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Talks, invited |
Year of Publication | 2022 |
Location of Talk | NORA Annual Conference |
AI-Based Video Production for Soccer
In FOKUS Media Web Symposium, 2022. Status: Accepted
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Talks, invited |
Year of Publication | 2022 |
Location of Talk | FOKUS Media Web Symposium |
URL | https://www.fokus.fraunhofer.de/go/mws |
7 Things They Don't Tell You About Streaming Analytics
In Demuxed, 2022. Status: Accepted
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Talks, invited |
Year of Publication | 2022 |
Location of Talk | Demuxed |
URL | https://2022.demuxed.com/#speakers |
Explainable Artificial Intelligence in Medicine
In Nordic AI Meet 2022, 2022. Status: Accepted
Machine learning (ML) has shown outstanding abilities to solve a large variety of tasks, such as image recognition and natural language processing, which have huge relevance for the medical field. Complex ML models, including convolutional neural networks (CNNs), are used to analyse high-dimensional data such as images and videos from medical examinations. With increasing model complexity, the demand for techniques improving human understanding of the ML models also increases. If medical doctors do not understand how the models work, they might not know when the models are actually wrong, or may even refuse to use them. This can hamper the implementation of ML systems in the clinic and negatively affect patients.

To promote successful integration of ML systems in the clinic, it is important to provide explanations that establish trust in the models among healthcare personnel. Explainable artificial intelligence (XAI) aims to provide explanations about ML models and their predictions, and several techniques have already been developed. However, existing XAI methods often fail to meet the requirements of medical doctors, probably because they are not sufficiently involved in the development of the methods.

We develop ML models solving tasks in various medical domains. The resulting models are explained using a selection of existing XAI methods, and the explanations are evaluated by medical experts. Their feedback is used to develop improved XAI methods. We have investigated established techniques for making ML systems more transparent in the fields of gastroenterology, assisted reproductive technology, organ transplantation and cardiology. Experiences from our projects will be used to develop new explanation techniques for clinical practice in close collaboration with medical experts.
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Talks, invited |
Year of Publication | 2022 |
Location of Talk | Nordic AI Meet 2022 |
Keywords | Explainable artificial intelligence, Machine learning, medicine |