Unveiling the Lack of Transparency in AI Research


A recent systematic review by Burak Kocak, MD, et al. has revealed a lack of transparency in AI research. The data, presented in Academic Radiology, showed that only 18% of the 194 selected radiology and nuclear medicine studies included in the analysis had raw data available, and only one paper provided access to private data. Additionally, just one-tenth of the selected papers shared their pre-modeling, modeling, or post-modeling files.

The authors attributed this lack of availability mainly to the regulatory hurdles that must be overcome to address privacy concerns. They suggested that manuscript authors, peer reviewers, and journal editors could help make AI studies more reproducible in the future by being conscious of transparency and data/code availability when publishing research results.

The findings highlight the importance of transparency in AI research. Without access to data and code, it is difficult to validate and replicate results, which erodes trust in them. This is especially important for medical AI research, where the safety and efficacy of treatments and diagnostics depend on accurate and reliable results. What further steps can be taken to increase transparency while still protecting privacy?

The CNIL’s Privacy Research Day

The CNIL’s first International Conference on Research in Privacy took place in Paris yesterday, June 28, and was broadcast online for free. In addition to providing a great opportunity to consider the influence of research on regulation and vice versa, the conference helped build bridges between regulators and researchers.

During the day, experts from different fields presented their work and discussed its impact on regulation and vice versa. I attended online, and the panelists covered many interesting topics, including the economics of privacy, smartphones and apps, AI and explanation, and more. One of the panels I particularly enjoyed was the one on AI and explanation.

Machine learning algorithms are becoming more prevalent, so it is important to examine other factors in addition to optimal performance when evaluating them. Among these factors, privacy, ethics, and explainability deserve more attention. Much of the interesting work presented relates to what my colleagues and I are working on right now and to my upcoming projects.

You are welcome to contact me if you are curious about what I am working on and would like to collaborate.

Panel Discussion on the topic of Designing IoT Systems

I was invited to participate in a panel discussion at Malmö University on Friday, April 8th, where I was asked to speak on the topic of “Designing IoT Systems.” Representatives from Sony and Sigma Connectivity were on the panel with me. Concerns about trustworthiness were a major topic of discussion during the session.

Several researchers identify safety, security, privacy, reliability, and resilience as the main trustworthiness concerns in the IoT domain. Addressing these concerns ensures that systems function as intended in a variety of situations.

According to several academics, the most challenging aspects of designing trustworthy IoT systems are achieving privacy and security. From applications to devices, each layer of the Internet of Things has its own set of security risks and potential attacks. From a research perspective, building energy-efficient security is a hot topic, along with scalable and dynamic security architectures. Preserving data privacy in the IoT is also particularly challenging: existing IoT privacy mechanisms are often built for single services, not for interdependent, dynamic, and heterogeneous services. Building new privacy preservation techniques for interdependent services is a hot topic, as is federated learning when it comes to data privacy.

Panel discussion on the topic of “Designing IoT Systems”

Finally, a number of standards pertain to trustworthiness, including two from ISO/IEC: ISO/IEC 30147, “Integration of trustworthiness in IoT lifecycle processes,” and ISO/IEC 30149, “IoT trustworthiness principles.”

If you want to collaborate with me or learn more about a topic related to my research, please send me an email.

The Importance of Information Ethics in the Digital Age

Over the years, the world has witnessed a technological evolution that has turned the World Wide Web into a place where information about individuals is acquired and spread. Information ethics is a subset of ethics that investigates the impact of information technology on society. It draws on a variety of fields, including philosophy, law, and computer science. Information ethics seeks to help us, as individuals, companies, governments, and societies, think about information: what it is, where it comes from, and how we use it. With the rapid rise of ubiquitous computing and networks, it is becoming an increasingly essential topic of research.

As our world gets more interconnected, individuals must make more responsible decisions about how they acquire, use, and share information with others. Making these decisions can be challenging at times, especially when there is little guidance available to help us decide what is acceptable and what is not. If one’s actions or inactions have the potential to cause harm to others, one should be held accountable. Information ethics looks at what is right and wrong in relation to information systems. But where can we find these rules, and how can we apply them to the Internet, particularly to the Internet of Things, where certain key decisions are made automatically by machines?

This is a topic that I have been researching for the past few months, and I was also able to publish a paper on it. If you are a scholar, or simply interested in exploring ethics, I recommend the book “Ethics & Technology: Controversies, Questions, and Strategies for Ethical Computing” by Herman T. Tavani.

Interactive Event on Digital Ethics

On Friday, 23rd April, I attended an interactive event on the topic of digital ethics, organised by RISE in collaboration with industry. Together, we explored and discussed data privacy, integrity, trust, and transparency in AI. Many interesting discussions followed in Zoom breakout rooms, especially after the presentation from the “Sjyst data!” project.

We talked about the generic development and implementation of AI for emerging systems, and the related ethical implications. An interesting point was raised about the passive collection of MAC addresses and whether these are considered personal data under the GDPR. On that note, over Zoom chat, someone also mentioned foot traffic data and its processing, especially during the Covid-19 pandemic. Even though data may appear to mean nothing particular or worrying to us in isolation, when aggregated and linked with other data sources it can paint a detailed profile of us.

Here is a screenshot showing the event hosts: Nina Bozic (senior researcher) and Katarina Pietrzak (educational strategist) along with RISE experts and guests.

Interactive event on Digital Ethics

I am looking forward to the next one!