Exploring Some Misconceptions and Complexities of Artificial Intelligence

Artificial intelligence (AI) is a rapidly advancing field that has the potential to revolutionize many aspects of our daily lives. However, as with any complex subject, there are often misunderstandings and misconceptions about what AI is and what it can do. In this article, we will explore some of these misconceptions.

The intersection of reasoning and learning in AI techniques. AI techniques can be broadly grouped into two categories: those based on reasoning and those based on learning. However, the two are not mutually exclusive. For example, expert systems, which rely on reasoning, may also incorporate elements of learning, such as adjusting their rules or weightings based on past performance or feedback.

The versatility of machine learning. Machine learning enables AI systems to learn how to solve problems that cannot be precisely specified, or whose solution method cannot be described by symbolic reasoning rules. However, machine learning is not limited to such problems. It can also learn from structured data, and it can be combined with symbolic reasoning techniques to achieve a wider range of capabilities.

The diversity of machine learning techniques. Definitions and taxonomies of machine learning often mention only supervised, unsupervised, and reinforcement learning. However, there are other types of machine learning, such as semi-supervised learning and active learning. Each of these has its own characteristics and is suited to different kinds of problems and data.

The relationship between AI and robotics. AI and robotics are closely related fields that often overlap, but they are distinct areas of study. While robotics can be considered a subfield of AI, it is possible to study robotics independently of AI. Similarly, AI can be studied without necessarily delving into the field of robotics. 

In conclusion, the field of AI is vast and complex, with many nuances and misconceptions that are important to understand. Despite these complexities, the potential for AI to revolutionize many aspects of our lives makes it a field worth exploring and understanding.

The Different Types of Privacy-Preserving Schemes

Machine learning (ML) is a subset of artificial intelligence (AI) that gives systems the ability to learn and improve automatically from experience without being explicitly programmed. ML has led to important advances in a number of fields, including robotics, healthcare, natural language processing, and many more. With ever-growing concerns over data privacy, there has been increasing interest in privacy-preserving ML. To protect the privacy of data while still allowing it to be used for ML, various privacy-preserving schemes have been proposed. Here are some of the main ones:

Secure multiparty computation (SMC) is a privacy-preserving scheme that allows multiple parties to jointly compute a function over their combined data while keeping each party's input private. Typically, each input is split into secret shares that are distributed among the parties; each party computes on the shares it holds, which reveal nothing on their own, and the partial results are then combined to obtain the final output.
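As a minimal sketch of the idea, the snippet below uses additive secret sharing to let three parties compute the sum of their private values (the salaries are hypothetical): no party ever sees another party's input, yet the combined partial sums reveal the total.

```python
import random

PRIME = 2**31 - 1  # field modulus for the shares (illustrative choice)

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Three parties each hold a private value (hypothetical salaries).
inputs = [52_000, 61_000, 47_000]

# Each party secret-shares its input; party i ends up holding one
# share of every input, and no single share reveals anything.
all_shares = [share(x, 3) for x in inputs]

# Each party locally sums the shares it holds...
partials = [sum(all_shares[j][i] for j in range(3)) % PRIME for i in range(3)]

# ...and combining the partial sums reveals only the total.
total = sum(partials) % PRIME
print(total)  # 160000
```

Real SMC protocols also handle multiplication, malicious parties, and communication rounds; this only shows the secret-sharing principle for addition.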

Homomorphic encryption (HE) is a form of encryption that allows computations to be performed directly on encrypted data. The encryption preserves the structure of the data, so decrypting the result of a computation on ciphertexts yields the same answer as performing that computation on the plaintexts. HE can therefore protect the privacy of data while still allowing it to be computed on.
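The homomorphic property can be illustrated with toy, unpadded textbook RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. The primes below are deliberately tiny and utterly insecure; practical HE schemes (e.g. Paillier, BFV, CKKS) are far more involved, so treat this purely as a demonstration of the property.

```python
# Textbook RSA parameters (insecure toy values, for illustration only).
p, q = 61, 53
n = p * q              # modulus, 3233
e = 17                 # public exponent
d = 2753               # private exponent: e * d = 1 (mod 3120)

def enc(m):
    """Encrypt a message under the public key (e, n)."""
    return pow(m, e, n)

def dec(c):
    """Decrypt a ciphertext with the private key (d, n)."""
    return pow(c, d, n)

a, b = 7, 9
product_ct = (enc(a) * enc(b)) % n   # multiply the ciphertexts only
print(dec(product_ct))               # 63, i.e. a * b
```

Decrypting the ciphertext product gives a * b even though the multiplication happened entirely in the encrypted domain.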

Differential privacy (DP) preserves privacy by adding noise to data or query results in order to mask any individual's contribution. The noise is calibrated so that aggregate statistics remain accurate while individual records are obscured. It can be added in a variety of ways, the most common being the Laplace mechanism. DP is useful because it makes it difficult to infer any individual's information from the dataset.

Gradient masking is a technique used to prevent sensitive information from leaking through the gradients of an ML model – the gradients are the partial derivatives of the loss function with respect to the model parameters. This is done by adding noise to the gradients so that they are harder to interpret, which makes it more difficult to recover the underlying training data from them.
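A minimal sketch of one common recipe (as used in DP-SGD-style training): clip each per-example gradient to a norm bound, then add noise before the update. The gradient vector and parameter values below are hypothetical.

```python
import math
import random

def mask_gradient(grad, clip_norm=1.0, noise_std=0.1):
    """Clip a gradient vector to a norm bound, then add Gaussian noise."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]           # bound each example's influence
    return [g + random.gauss(0.0, noise_std) for g in clipped]

grad = [0.8, -2.4, 1.6]          # hypothetical per-example gradient
masked = mask_gradient(grad)     # what the optimiser (or server) actually sees
```

Clipping bounds how much any single training example can influence the update, and the noise obscures what remains, at the cost of slower or noisier convergence.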

Secure enclaves (SE) are hardware or software environments designed to be secure against tampering or interference. They are often used to store or process sensitive data, such as cryptographic keys, in a way that is isolated from the rest of the system.

There are many ways to preserve privacy when working with ML models, each with its own trade-offs. In this article, we summarised five of these methods. Each has strengths and weaknesses, so it is important to choose the right one for the specific application.

Interactive Event on Digital Ethics

On Friday, 23rd April, I attended an interactive event on the topic of digital ethics. This event was organised by RISE in collaboration with industry. Together, we explored and discussed the topics of data privacy, integrity, trust, and transparency in AI. Many interesting discussions followed in Zoom breakout rooms, especially after the presentation from the “Sjyst data!” project.

We talked about the generic development and implementation of AI for emerging systems, and the related ethical implications. An interesting point was raised about the passive collection of MAC addresses and whether these are considered personal data under the GDPR. On that note, someone in the Zoom chat also mentioned foot traffic data and its processing, especially during the Covid-19 pandemic. Even though data may appear to mean nothing particular or worrying to us at a given point, when aggregated and linked with other data sources it can paint a detailed profile of us.

Here is a screenshot showing the event hosts: Nina Bozic (senior researcher) and Katarina Pietrzak (educational strategist) along with RISE experts and guests.

Interactive event on Digital Ethics

I am looking forward to the next one!

Interesting Book Showed Up In My Mailbox

Today, I am happy to have received a hardcopy of the book – Privacy and Identity Management. Data for Better Living: AI and Privacy. The book includes a chapter that I authored together with my academic advisor, titled “On the Design of a Privacy-Centered Data Lifecycle for Smart Living Spaces.” In that chapter, we identify, among other things, how the software development process can be enhanced to manage privacy threats.

Privacy and Identity Management

Hardcopy of the book “Privacy and Identity Management. Data for Better Living: AI and Privacy”

All the articles included in the book are certainly worth a read, covering various aspects of privacy from technical, compliance, and legal perspectives.