Publications
Journal Article
Artificial intelligence in dry eye disease
The Ocular Surface 23 (2022): 74-86. Status: Published
Dry eye disease (DED) has a prevalence of between 5 and 50%, depending on the diagnostic criteria used and population under study. However, it remains one of the most underdiagnosed and undertreated conditions in ophthalmology. Many tests used in the diagnosis of DED rely on an experienced observer for image interpretation, which may be considered subjective and result in variation in diagnosis. Since artificial intelligence (AI) systems are capable of advanced problem solving, use of such techniques could lead to more objective diagnosis. Although the term ‘AI’ is commonly used, recent success in its applications to medicine is mainly due to advancements in the sub-field of machine learning, which has been used to automatically classify images and predict medical outcomes. Powerful machine learning techniques have been harnessed to understand nuances in patient data and medical images, aiming for consistent diagnosis and stratification of disease severity. This is the first literature review on the use of AI in DED. We provide a brief introduction to AI, report its current use in DED research and its potential for application in the clinic. Our review found that AI has been employed in a wide range of DED clinical tests and research applications, primarily for interpretation of interferometry, slit-lamp and meibography images. While initial results are promising, much work is still needed on model development, clinical testing and standardisation.
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Journal Article |
Year of Publication | 2022 |
Journal | The Ocular Surface |
Volume | 23 |
Pagination | 74 - 86 |
Date Published | Jan-01-2022 |
Publisher | Elsevier |
ISSN | 1542-0124 |
Keywords | artificial intelligence, Dry eye disease, Machine learning |
URL | https://linkinghub.elsevier.com/retrieve/pii/S1542012421001324 |
DOI | 10.1016/j.jtos.2021.11.004 |
Bias in Quantitative Analysis in Welfare
Tidsskrift for velferdsforskning (2022). Status: Accepted
According to Norway's national strategy for artificial intelligence (2020), public administration and health are among Norway's priority areas for the use of artificial intelligence. Machine learning is a subfield of artificial intelligence with the potential to solve a range of challenges, but it also gives rise to challenges of its own. One such challenge is bias. An example of bias is when existing inequalities in society are reflected in the data on which machine learning models are trained; the resulting models thus risk adopting and perpetuating these inequalities. A further challenge is that bias is defined differently across disciplines and can have many different origins. We contribute to addressing this challenge by providing an overview of different types of bias and their origins, illustrated from a welfare perspective, and by clarifying how bias differs from the related concept of fairness. We demonstrate challenges related to the behaviour of data-driven models by using machine learning to predict future resource needs in the healthcare system, specifically the number of general practitioner visits in municipalities. We demonstrate different types of bias, discuss possible solutions, and use methods from explainable artificial intelligence to analyse the origins of bias in the explanatory variables. There is no universal solution for handling all types of bias, but bias must be accounted for in every part of a quantitative analysis.
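As a minimal sketch of the kind of analysis the abstract describes (predicting municipality-level GP visits and inspecting which explanatory variables drive the predictions), here is an illustrative example using scikit-learn's gradient boosting and permutation importance in place of the paper's XGBoost and explainability tooling. All feature names, coefficients and data are synthetic assumptions, not values from the study:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical municipality-level features: share of elderly residents,
# median income (in thousands) and population size. Purely illustrative.
n = 400
elderly = rng.uniform(0.05, 0.30, n)
income = rng.uniform(300, 700, n)
population = rng.uniform(1e3, 5e4, n)
X = np.column_stack([elderly, income, population])

# Synthetic target: GP visits driven mainly by the elderly share.
y = 5000 * elderly + 0.5 * income + rng.normal(0, 20, n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, random_state=0)
for name, score in zip(["elderly", "income", "population"], imp.importances_mean):
    print(f"{name}: {score:.2f}")
```

If a feature correlated with a protected or societally skewed attribute dominates the importances, that is one concrete way bias in the explanatory variables can surface in such an analysis.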
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Journal Article |
Year of Publication | 2022 |
Journal | Tidsskrift for velferdsforskning |
Publisher | Universitetsforlaget |
Keywords | forklarbar kunstig intelligens, Maskinlæring, skjevhetsbegreper, velferdsforskning, XGBoost |
Proceedings, refereed
Automatic Unsupervised Clustering of Videos of the Intracytoplasmic Sperm Injection (ICSI) Procedure
In NAIS 2022. NAIS 2022, 2022. Status: Accepted
The in vitro fertilization procedure called intracytoplasmic sperm injection can be used to help fertilize an egg by injecting a single sperm cell directly into its cytoplasm. To evaluate, refine and improve the method, the procedure is usually observed directly at the fertility clinic. Alternatively, a video of the procedure can be examined and labeled in a time-consuming process. To reduce the time required for the assessment, we propose an unsupervised method that automatically clusters video frames of the intracytoplasmic sperm injection procedure. Deep features are extracted from the video frames and form the basis for a clustering method. The method provides meaningful clusters representing different stages of the intracytoplasmic sperm injection procedure. The clusters can lead to more efficient examinations and possible new insights that can improve clinical practice. Furthermore, it may contribute to improved clinical outcomes through an increased understanding of the technical aspects of the procedure. Despite promising results, the proposed method can be further improved by increasing the amount of data and exploring other types of features.
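The pipeline the abstract outlines (per-frame deep features, then clustering into procedure stages) can be sketched as follows. This is an illustration, not the paper's implementation: the feature dimension, the cluster count of three, and the synthetic stand-in features (in practice these would be embeddings from a pretrained CNN applied to each frame) are all assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for deep features: simulate three procedure stages as
# well-separated Gaussian blobs in a 512-dimensional embedding space.
n_frames, dim = 300, 512
features = np.vstack([
    rng.normal(loc=c, scale=0.5, size=(n_frames // 3, dim))
    for c in (-2.0, 0.0, 2.0)
])

# Normalise the features, then cluster frames into candidate stages.
features = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

print(np.bincount(labels))  # number of frames assigned to each cluster
```

With real videos, the clusters would then be inspected to check whether they correspond to meaningful stages of the injection procedure.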
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Proceedings, refereed |
Year of Publication | 2022 |
Conference Name | NAIS 2022 |
Publisher | NAIS 2022 |
Keywords | clustering, Computer Vision and Pattern Recognition (cs.CV), human reproduction, medical videos, Unsupervised learning |
Huldra: A Framework for Collecting Crowdsourced Feedback on Multimedia Assets
In ACM Multimedia Systems (MMSys) Conference. The ACM Multimedia Systems Conference (MMSys): ACM, 2022. Status: Accepted
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Proceedings, refereed |
Year of Publication | 2022 |
Conference Name | ACM Multimedia Systems (MMSys) Conference |
Publisher | ACM |
Place Published | The ACM Multimedia Systems Conference (MMSys) |
DOI | 10.1145/3524273.3532887 |
Predicting Tacrolimus Exposure in Kidney Transplanted Patients Using Machine Learning
In 35th IEEE CBMS International Symposium on Computer-Based Medical Systems. IEEE, 2022. Status: Accepted
Tacrolimus is one of the cornerstone immunosuppressive drugs in most transplantation centers worldwide following solid organ transplantation. Therapeutic drug monitoring of tacrolimus is necessary in order to avoid rejection of the transplanted organ or severe side effects. However, finding the right dose for a given patient is challenging, even for experienced clinicians. Consequently, a tool that can accurately estimate the drug exposure for individual dose adaptions would be of high clinical value. In this work, we propose a new technique using machine learning to estimate the tacrolimus exposure in kidney transplant recipients. Our models achieve predictive errors that are at the same level as an established population pharmacokinetic model, but are faster to develop and require less knowledge about the pharmacokinetic properties of the drug.
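As a hedged sketch of the modelling setup the abstract describes (regressing drug exposure on patient covariates), here is an illustrative example with scikit-learn's gradient boosting. The covariates, the synthetic exposure target and all coefficients are invented for demonstration and do not reflect the paper's data or pharmacokinetics:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Hypothetical covariates: dose (mg), trough concentration (ug/L),
# body weight (kg) and hematocrit fraction. Purely illustrative.
n = 500
X = rng.uniform([1, 2, 50, 0.25], [10, 15, 110, 0.50], size=(n, 4))

# Synthetic "exposure" (AUC-like) target with measurement noise.
y = 30 * X[:, 1] + 200 * X[:, 0] * X[:, 3] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"MAE: {mae:.1f}")
```

Unlike a population pharmacokinetic model, this approach learns the dose-exposure relationship directly from data, which is the faster-to-develop trade-off the abstract highlights.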
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Proceedings, refereed |
Year of Publication | 2022 |
Conference Name | 35th IEEE CBMS International Symposium on Computer-Based Medical Systems |
Publisher | IEEE |
Keywords | Machine learning, personalized medicine, transplantation |
Research proposal: Explainability methods for machine learning systems for multimodal medical datasets
In ACM Multimedia Systems (MMSys) Conference. The ACM Multimedia Systems Conference (MMSys): ACM, 2022. Status: Accepted
This paper contains the research proposal of Andrea M. Storås that was presented at the MMSys 2022 doctoral symposium. Machine learning models can solve medical tasks with a high level of performance, e.g., classifying medical videos and detecting anomalies using different sources of data. However, many of these models are highly complex and difficult to understand, and this lack of interpretability can limit the use of machine learning systems in the medical domain. Explainable artificial intelligence provides explanations regarding the models and their predictions. In this PhD project, we develop machine learning models for automatic analysis of medical data and explain the results using established techniques from the field of explainable artificial intelligence. Current research indicates that there are still open issues to be solved before end users can understand multimedia systems powered by machine learning. Consequently, new explanation techniques will also be developed. Different types of medical data are used in order to investigate the generalizability of the methods.
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Proceedings, refereed |
Year of Publication | 2022 |
Conference Name | ACM Multimedia Systems (MMSys) Conference |
Publisher | ACM |
Place Published | The ACM Multimedia Systems Conference (MMSys) |
ISBN Number | 978-1-4503-9283-9/22/06 |
DOI | 10.1145/3524273.3533925 |
Unsupervised Image Segmentation via Self-Supervised Learning Image Classification
In MediaEval 2021. Working Notes Proceedings of the MediaEval 2021 Workshop ed. CEUR Workshop Proceedings, 2022. Status: Published
This paper presents the submission of team Medical-XAI for the Medico: Transparency in Medical Image Segmentation task held at MediaEval 2021. We propose an unsupervised method that utilizes tools from the field of explainable artificial intelligence to create segmentation masks. We extract heat maps, which explain how the 'black box' model predicts the category of a given image, and derive the segmentation masks directly from these heat maps. Our results show that the created masks can capture the relevant findings to a certain extent, using only a small amount of image-level labeled data for the classification model and no segmentation masks at all during training. This is promising for addressing several challenges at the intersection of artificial intelligence and medicine, such as data availability, the cost of labeling, and the need for interpretable and explainable results.
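The core idea of deriving a segmentation mask directly from a classification heat map can be sketched in a few lines. This toy example fabricates a Grad-CAM-style heat map (in the paper, it would come from the trained classifier) and thresholds it into a binary mask; the image size, bright region and threshold value are all assumptions:

```python
import numpy as np

# Toy "Grad-CAM style" heat map with one bright rectangular region
# standing in for a model's attention on a finding (e.g. a polyp).
h = np.zeros((64, 64))
h[20:40, 25:50] = 1.0
heat = h + 0.05 * np.random.default_rng(1).random((64, 64))

# Min-max normalise the map, then threshold it into a binary mask.
heat = (heat - heat.min()) / (heat.max() - heat.min())
mask = (heat >= 0.5).astype(np.uint8)

print(mask.sum())  # pixels marked as the segmented region
```

The appeal of the approach is that no pixel-level annotations are needed: the mask falls out of an explanation artifact that the classifier already provides.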
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Proceedings, refereed |
Year of Publication | 2022 |
Conference Name | MediaEval 2021 |
Edition | Working Notes Proceedings of the MediaEval 2021 Workshop |
Publisher | CEUR Workshop Proceedings |
Keywords | clustering, Explainable artificial intelligence, Global Features, Grad-CAM, Image segmentation, Medical imaging, Polyp Detection, Self-supervised learning |
URL | https://2021.multimediaeval.com/paper12.pdf |
Poster
Predicting drug exposure in kidney transplanted patients using machine learning
NORA Annual Conference 2022, 2022. Status: Accepted
Affiliation | Machine Learning |
Project(s) | Department of Holistic Systems |
Publication Type | Poster |
Year of Publication | 2022 |
Place Published | NORA Annual Conference 2022 |
Type of Work | Poster presentation |