Exploring the Frontier of Green Intelligent Homes: My Presentation in Prague

Photo by Capricious Wayfarer (Saptarshi) on Pexels.com

As we continue to embrace the benefits of smart technology, the concept of the Green Intelligent Home is an exciting and promising development in the evolution of smart homes. Our paper, which I presented at the IoTBDS conference in Prague last weekend, explores this frontier.

A world where Green Intelligent Homes are the norm is an intriguing prospect, offering increased automation, personalization, sustainability, and more. Nonetheless, as with any emerging technology, it is important to be aware of the potential risks and implications, which include security and privacy concerns, the manipulation of people, and a lack of self-sufficiency.

As the Green Intelligent Home concept continues to develop, it is essential to stay informed and explore the potential of this technology. If you are interested in learning more about the Green Intelligent Home or in collaborating on related projects, please get in touch.

Unveiling the Lack of Transparency in AI Research

Photo by FOX on Pexels.com

A recent systematic review by Burak Kocak, MD, et al., published in Academic Radiology, revealed a lack of transparency in AI research. Of the 194 radiology and nuclear medicine studies selected for the analysis, only 18% made their raw data available, and only one paper provided access to private data. Additionally, just one-tenth of the selected papers shared their pre-modeling, modeling, or post-modeling files.

The authors attributed this lack of availability mainly to the regulatory hurdles that must be overcome to address privacy concerns. They suggested that manuscript authors, peer reviewers, and journal editors could make future AI studies more reproducible by being conscious of transparency and of data and code availability when publishing research results.

The findings highlight the importance of transparency in AI research. Without access to data and code, it is difficult to validate and replicate results, which undermines trust in them. This is especially important for medical AI research, where the safety and efficacy of treatments and diagnostics depend on accurate and reliable results. What further steps can be taken to increase transparency while still protecting privacy?

Exploring the Interdependencies between AI and Cybersecurity

Photo by Pixabay on Pexels.com

With AI technology increasingly prevalent in our lives, it is important to understand its relationship with cybersecurity. The relationship is complex and multi-dimensional, spanning the cybersecurity of AI systems themselves, the use of AI to bolster cyber defenses, and the malicious use of AI.

  • Protecting AI Systems from Cyber Threats: As AI is increasingly used in a variety of applications, the security of the AI technology and its systems is paramount. This includes the implementation of measures such as data encryption, authentication protocols, and access control to ensure the safety and integrity of AI systems.
  • Using AI to Support Cybersecurity: AI-based technologies can detect cyber threats and anomalies that traditional security tools may miss. AI-powered security tools are being developed to analyze data and flag malicious activities such as malware and phishing attacks (a minimal sketch of this idea follows the list).
  • AI-Facilitated Cybercrime: AI-powered tools can be used in malicious ways, from deepfakes used to spread misinformation to botnets used to launch DDoS attacks. The potential for malicious use of AI is a major concern for cybersecurity professionals.
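
As a minimal illustration of the second point above, the sketch below trains an Isolation Forest (scikit-learn) on synthetic traffic records and flags outliers. The feature set, the synthetic data, and the contamination rate are all assumptions made for illustration; this is not a production detection pipeline.

```python
# A minimal sketch of AI-assisted threat detection: an Isolation Forest
# flags anomalous network-traffic records. Features and parameters are
# illustrative placeholders, not a real security configuration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic "normal" traffic: (bytes sent, packets/s, distinct ports contacted)
normal = rng.normal(loc=[500, 50, 3], scale=[100, 10, 1], size=(1000, 3))

# A few synthetic outliers mimicking scanning/exfiltration behavior
attacks = rng.normal(loc=[5000, 400, 60], scale=[500, 50, 5], size=(10, 3))

traffic = np.vstack([normal, attacks])

# contamination = expected fraction of anomalies (an assumption here)
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(traffic)  # -1 = anomaly, 1 = normal

print(f"Flagged {(labels == -1).sum()} of {len(traffic)} records as anomalous")
```

In practice, such models are trained on real telemetry, tuned against labeled incidents, and combined with rule-based detection rather than used in isolation.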

In conclusion, the relationship between AI and cybersecurity cuts in several directions: AI is being used to strengthen cyber defenses while simultaneously being exploited for malicious activities. Cybersecurity professionals must remain aware of the potential for malicious use of AI and ensure that the security of AI systems themselves is maintained.

Human-centered AI Course

In the fall of 2019, I enrolled in the PhD course titled “Introduction to Human-centered AI.” The course is delivered and managed by Cecilia Ovesdotter Alm from the Rochester Institute of Technology (RIT).

Human-centered AI is essentially a perspective on AI and ML holding that algorithms must be designed with the awareness that they are part of a larger system consisting of human stakeholders. According to Mark O. Riedl, the main requirements of human-centered AI can be broken into two aspects: (a) AI systems that have an understanding of human sociocultural norms as part of a theory of mind about people, and (b) AI systems that are capable of producing explanations that non-experts in AI or computer science can understand.

Human-centered AI: course introduction lecture held at Malmö University (2019).

One of the course learning outcomes is to demonstrate critical thinking concerning bias and fairness in data analysis, including but not limited to gender aspects. To that end, I have put together a 10-minute presentation of the article “50 Years of Test (Un)fairness: Lessons for Machine Learning” by Ben Hutchinson and Margaret Mitchell.
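
To make the fairness discussion concrete, here is a small self-contained sketch of two group fairness metrics that appear in this literature: demographic parity difference and equal-opportunity difference. The data and the deliberately biased toy classifier are synthetic; all names and numbers are illustrative assumptions.

```python
# Illustrative sketch (synthetic data) of two group fairness metrics:
# demographic parity difference and equal-opportunity difference.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 1000

group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)
y_true = rng.integers(0, 2, size=n)   # ground-truth labels

# Toy classifier, deliberately more likely to predict positive for group 1
p_positive = 0.1 + 0.6 * y_true + 0.2 * group
y_pred = (rng.random(n) < p_positive).astype(int)

# Demographic parity difference: P(y_pred=1 | group=1) - P(y_pred=1 | group=0)
dp_diff = y_pred[group == 1].mean() - y_pred[group == 0].mean()

# Equal-opportunity difference: gap in true-positive rates between groups
def tpr(g):
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

eo_diff = tpr(1) - tpr(0)

print(f"Demographic parity difference: {dp_diff:+.3f}")
print(f"Equal opportunity difference:  {eo_diff:+.3f}")
```

A value near zero on either metric indicates parity between the groups; a key point of the Hutchinson and Mitchell article is that such criteria, and their mutual incompatibilities, were already being debated in the educational testing literature of the 1960s and 70s.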