A recent systematic review by Burak Kocak, MD, and colleagues has revealed a lack of transparency in AI research. The data, published in Academic Radiology, showed that only 18% of the 194 radiology and nuclear medicine studies included in the analysis made their raw data available, and only one paper provided access to private data. Additionally, just one-tenth of the selected papers shared their pre-modeling, modeling, or post-modeling files.
The authors attributed this lack of availability mainly to the regulatory hurdles that must be overcome to address privacy concerns. They suggested that manuscript authors, peer reviewers, and journal editors could make future AI studies more reproducible by attending to transparency and data/code availability when publishing research results.
The findings highlight the importance of transparency in AI research. Without access to data and code, results are difficult to validate and replicate, which undermines trust in the work. This matters especially for medical AI research, where the safety and efficacy of treatments and diagnostics depend on accurate, reliable results. What further steps can be taken to increase transparency while still protecting patient privacy?