Data Security and Privacy in the Era of Floating Homes

Slightly over a year ago, I mentioned Ocean Builders’ innovative living pods and how they use smart home technologies in their vessels. Now a new contender, Reina, takes the stage. Reina’s flagship yacht home, the luxurious Reina Live L44DR, showcases not only lavishness but also enhanced comfort and convenience by incorporating smart home functionalities (smart TVs, smart speakers, etc.).

The transition from a fixed abode to a mobile dwelling invites inquiry. Can a floating home offer a higher degree of security and privacy than its stationary counterpart? Do the connectivity challenges of floating homes resemble those of connected cars and trucks? Beyond concerns about location privacy, the many facets of this discussion warrant closer exploration, as the appeal of these aquatic residences endures. I also briefly addressed this theme at one of the recent conferences where I presented.

EU Data Initiatives: Developments to Watch in 2024 and Beyond

The European Union (EU) has been at the forefront of global efforts to protect privacy and personal data. Over the years, the EU has implemented several initiatives and regulations that aim to safeguard the privacy rights of its citizens. The International Association of Privacy Professionals (IAPP) has created a timeline of key dates for these EU regulations and initiatives, including those that are yet to be finalized.


Here are the key dates to watch out for the year 2024 and beyond:

  • February 17, 2024: The Digital Services Act (DSA), which aims to establish clear rules for online platforms and strengthen online consumer protection, will become applicable
  • Spring 2024: The AI Act is expected to be adopted
  • Mid-2024: The Data Act is expected to enter into force
  • October 18, 2024: The NIS2 directive will become applicable
  • January 17, 2025: The DORA regulation will become applicable

In conclusion, the EU’s data initiatives are set to undergo significant changes in the coming years with the implementation of regulations like the DSA, AI Act, Data Act, NIS2 directive, and DORA regulation. These initiatives aim to establish clear rules for online platforms, strengthen online consumer protection, facilitate data sharing, and more. It is crucial for organizations and individuals alike to stay up to date with these key dates, both to ensure compliance with the new regulations and to take advantage of the opportunities they present.

For a more detailed overview of the EU’s data initiatives and their key dates, check out the infographic created by the IAPP here.

Explore the Future of Smart Home Technology with Amazon’s Dream Home


From Amazon’s Echo to its Ring doorbell, the tech giant has made its way into many of our homes. But do you know what Amazon is learning about you and your family? From its smart gadgets, services, and data collection, Amazon has the potential to build a detailed profile of its users.

The data collected by Amazon can help power an “ambient intelligence” to make our home smarter, but it can also be a surveillance nightmare. Amazon may not “sell” our data to third parties, but it can use it to gain insights into our buying habits and more.

We must all decide how much of our lives we’re comfortable letting Big Tech track. Read the story by Geoffrey A. Fowler here to explore the ways in which Amazon, and potentially other Big Tech companies, are watching us.

If you want to learn more about cyber security and smart homes, don’t hesitate to get in touch with me! I’m always happy to answer questions and am always looking for collaboration opportunities.

The Importance of Trustworthiness in the Age of the IoT: My First Article on Medium

There are many definitions of trustworthiness, but in general it can be described as the ability of a system to meet its objectives while adhering to a set of principles or guidelines. In the context of the IoT, the term “trustworthiness” is often used to refer to the ability of IoT devices and systems to accurately and reliably collect and communicate data.

If you would like to learn more about trustworthiness in the IoT, I suggest reading my latest article on Medium. In the article, I discuss the importance of trustworthiness in the age of the IoT. I also describe trustworthiness and explain why it is important for devices in the IoT. Moreover, I discuss some of the factors that contribute to trustworthiness in the IoT, including reliability, security, and transparency. Finally, I offer some tips on how individuals can ensure that their IoT devices and data are trustworthy.

The Different Types of Privacy-Preserving Schemes

Machine learning (ML) is a subset of artificial intelligence (AI) that gives systems the ability to improve automatically and learn from experience without explicit programming. ML has led to important advances in a number of fields, including robotics, healthcare, natural language processing, and many more. With ever-growing concerns over data privacy, there has been increasing interest in privacy-preserving ML. To protect the privacy of data while still allowing it to be used for ML, various privacy-preserving schemes have been proposed. Here are some of the main ones:

Secure multiparty computation (SMC) is a privacy-preserving scheme that allows multiple parties to jointly compute a function over their data while keeping that data private. This is typically achieved by secret-sharing the data among the parties, having each party perform a computation on the shares it holds, and then combining the results to obtain the final output.
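As a minimal sketch, one common SMC building block is additive secret sharing: each party splits its value into random shares that sum to the original, so no single share reveals anything on its own. The modulus, the three-party salary example, and the function names below are illustrative assumptions, not a production protocol.

```python
import random

Q = 2**31 - 1  # public modulus; all shares live in Z_Q

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def secure_sum(secrets):
    """Each party shares its secret; parties sum the shares they hold."""
    n = len(secrets)
    all_shares = [share(s, n) for s in secrets]
    # Party i receives the i-th share of every secret and adds them up.
    partial_sums = [sum(all_shares[j][i] for j in range(n)) % Q for i in range(n)]
    return sum(partial_sums) % Q  # combining partial sums reveals only the total

salaries = [52000, 61000, 47000]
print(secure_sum(salaries))  # 160000 — no party saw another's salary
```

Each individual share is uniformly random, so a single party learns nothing about the others’ inputs; only the combined result is revealed.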

Homomorphic encryption (HE) is a type of encryption that allows computations to be performed on encrypted data. This type of encryption preserves the structure of the data, which means that, after decryption, the results of the computations match those that would have been obtained on the unencrypted data. HE can thus be used to protect the privacy of data while still allowing computations on it.
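To make the idea concrete, here is a toy version of the Paillier cryptosystem, an additively homomorphic scheme where multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The tiny hardcoded primes are purely for illustration; real deployments use primes of 1024+ bits.

```python
import math
import random

# Toy Paillier keypair (illustration only — insecure key sizes).
p, q = 17, 19
n = p * q                                            # public modulus
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)    # lcm(p-1, q-1)
mu = pow(lam, -1, n)                                 # lam^{-1} mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # With g = n + 1, g^m = 1 + m*n (mod n^2)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(5), encrypt(7)
product = (c1 * c2) % n2   # multiplying ciphertexts adds the plaintexts
print(decrypt(product))    # 12
```

The server holding `c1` and `c2` can compute the encrypted sum without ever learning 5 or 7; only the key holder can decrypt the result.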

Differential privacy (DP) is a type of privacy preservation that adds noise to the data in order to mask any individual’s information. The noise is calibrated so that aggregate results remain useful while any single individual’s contribution is obscured. It can be added in a variety of ways, the most common being the Laplace mechanism. DP is useful because it makes it difficult to infer any individual’s information from the dataset.
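Here is a minimal sketch of the Laplace mechanism applied to a counting query (sensitivity 1): the noise scale is sensitivity divided by epsilon, so a smaller epsilon means more noise and stronger privacy. The dataset and function names are hypothetical.

```python
import math
import random

def laplace_noise(scale):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    """Counting query with sensitivity 1, made epsilon-differentially private."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1 / epsilon)  # scale = sensitivity / epsilon

ages = [23, 35, 41, 29, 52, 61, 38]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Any single released count is noisy, but averaged over many queries the results stay close to the true statistics; the privacy guarantee comes from bounding how much one person’s record can shift the output distribution.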

Gradient masking is a technique used to prevent sensitive information from leaking through the gradients of an ML model (the gradients are the partial derivatives of the loss function with respect to the model parameters). Noise is added to the gradients to make them harder to interpret, which in turn makes it more difficult to reconstruct the underlying data from them.
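A common concrete recipe, used for example in DP-SGD-style training, is to clip each gradient to a maximum L2 norm and then add noise. The sketch below, with hypothetical parameter values, shows just that clip-then-perturb step on a plain Python list.

```python
import math
import random

def mask_gradient(grad, clip_norm=1.0, noise_std=0.1):
    """Clip a gradient vector to a maximum L2 norm, then add Gaussian noise,
    making it harder to reconstruct individual training examples."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    return [g + random.gauss(0, noise_std) for g in clipped]

raw_grad = [0.8, -2.4, 1.1]
masked = mask_gradient(raw_grad)
```

Clipping bounds how much any one example can influence an update, and the noise hides the remainder; the cost is a (tunable) loss in training accuracy.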

Secure enclaves (SE) are hardware or software environments designed to resist tampering and interference. They are often used to store or process sensitive data, such as cryptographic keys, in isolation from the rest of the system.

There are many ways to preserve privacy when working with ML models, each with its own trade-offs. In this article, we summarized five of them. All of these methods have strengths and weaknesses, so it is important to choose the right one for the specific application.

The FTC wants to crack down on mass surveillance 

The practice of gathering, analyzing, and profiting from data about individuals is known as commercial surveillance. Due to the volume of data gathered by some companies, individuals may be vulnerable to identity theft and hacking. Indeed, the dangers and stakes of errors, deception, manipulation, and other abuses have increased as a result of mass surveillance. The Federal Trade Commission (FTC) is seeking input from the general public on whether additional regulations are necessary to safeguard individuals’ privacy and personal data in the commercial surveillance economy.


I advise you to attend the open forum on September 8, 2022, particularly if you are a researcher focusing on the topic of privacy and security. Also, if you are developing your own system or perhaps planning your next research project, I highly recommend you look at some of the topics identified by the FTC as these are likely to affect the design of your project. Here are the topics mentioned: “Harms to Consumers”, “Harms to Children”, “Costs and Benefits”, “Regulations”, “Automated Systems”, “Discrimination”, “Consumer Consent”, “Notice, Transparency, and Disclosure”, “Remedies”, and “Obsolescence”. Pay particular attention to the topic “Automated Systems” if your system uses AI/ML technologies.

More information can be found here: https://www.ftc.gov/legal-library/browse/federal-register-notices/commercial-surveillance-data-security-rulemaking and https://www.ftc.gov/news-events/news/press-releases/2022/08/ftc-explores-rules-cracking-down-commercial-surveillance-lax-data-security-practices

The CNIL’s Privacy Research Day

The CNIL’s first International Conference on Research in Privacy took place in Paris yesterday, June 28, and was broadcast online for free. In addition to providing a great opportunity to consider the influence of research on regulation and vice versa, the conference helped build bridges between regulators and researchers.

During the day, experts from different fields presented their work and discussed its impact on regulation and vice versa. I attended online, and many interesting topics were covered by the panelists, ranging from the economics of privacy to smartphones and apps, AI and explanation, and more. One of the panels I particularly enjoyed was the one on AI and explanation.

Machine learning algorithms are becoming more prevalent, so it is important to examine other factors in addition to optimal performance when evaluating them. Among these factors, privacy, ethics, and explainability should be given more attention. Many of the interesting pieces I see here are related to what I and my colleagues are working on right now and what I have planned for my upcoming projects.

You are welcome to contact me if you are curious about what I am working on and would like to collaborate.

Threat Modeling: Some of the Best Methods

Threat modeling methods are a set of general principles and practices for identifying cyber threats to computer systems and software. These methods can be applied during the design phase of new systems or when assessing existing security controls against new threats. There are several threat modeling methodologies in use today, ranging from informal processes to formalized models that can be captured within software tools. A summary of some of the most popular threat modeling methods is provided below:

• Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege (STRIDE)

• Process for Attack Simulation and Threat Analysis (PASTA)

• Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)

• Trike

• Visual, Agile, and Simple Threat modeling (VAST)

• Common Vulnerability Scoring System (CVSS)

• Attack Trees 

• Persona non grata (PnG) 

• Security Cards 

• Hybrid Threat Modeling Method (hTMM)

• Quantitative Threat Modeling Method (QTMM)

• Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of information, Unawareness, and Non-compliance (LINDDUN)

All of the above methods are designed to detect potential threats, except for CVSS, which scores the severity of known vulnerabilities. The number and types of threats identified, as well as the quality and consistency of the results, vary considerably between methods. Which is your favorite threat modeling method? Are you interested in using some of the methods above for your company or research project?
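To illustrate one of the methods above, attack trees can be evaluated mechanically once the leaves are annotated: OR nodes take the cheapest child attack, while AND nodes require all children, so their costs add up. The tree below, its node names, and its costs are entirely hypothetical.

```python
# Minimal attack-tree evaluator: computes the cheapest path an attacker
# can take through a tree of OR/AND goals annotated with leaf costs.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "leaf"          # "leaf", "AND", or "OR"
    cost: float = 0.0           # attacker cost, for leaves
    children: list = field(default_factory=list)

def min_attack_cost(node):
    if node.kind == "leaf":
        return node.cost
    child_costs = [min_attack_cost(c) for c in node.children]
    return sum(child_costs) if node.kind == "AND" else min(child_costs)

# Hypothetical tree: steal data by either phishing an admin (cost 10)
# or both finding an exploit (3) and bypassing monitoring (4).
tree = Node("steal data", "OR", children=[
    Node("phish admin", cost=10),
    Node("exploit path", "AND", children=[
        Node("find exploit", cost=3),
        Node("bypass monitoring", cost=4),
    ]),
])
print(min_attack_cost(tree))  # 7
```

The same structure can be annotated with probabilities or detection likelihoods instead of costs; the point is that the tree makes the attacker’s cheapest option explicit.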

Panel Discussion on the topic of Designing IoT Systems

I was invited to participate in a panel discussion at Malmö University on Friday, April 8th, where I was asked to speak on the topic of “Designing IoT Systems”. Representatives from Sony and Sigma Connectivity joined me on the panel. Concerns about trustworthiness were a major topic of discussion during the session.

Safety, security, privacy, reliability, and resilience tend to be identified by researchers as the main trustworthiness concerns in the IoT domain. Addressing these concerns helps ensure that systems function as intended in a variety of situations.

According to several academics, the most challenging aspects of designing trustworthy IoT systems are achieving privacy and security. From applications to devices, each layer of the Internet of Things has its own set of security risks and potential attacks. From a research perspective, a hot topic is that of building energy-efficient security, along with scalable and dynamic security architectures. Preserving data privacy in the IoT, on the other hand, is also particularly challenging. Existing IoT privacy mechanisms are often built for single services, and not necessarily for interdependent, dynamic, and heterogeneous services. Building new privacy preservation techniques for interdependent services is a hot topic, as is federated learning when it comes to data privacy.
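To give a flavor of the federated learning idea mentioned above, here is a minimal sketch of one federated-averaging round: each client trains on its own data and only model updates, never raw data, reach the server. The single-weight model and the toy client datasets are hypothetical.

```python
# Minimal federated averaging: fit y = w * x by gradient descent on each
# client's private data, then have the server average the local weights.
def local_update(w, data, lr=0.01, steps=50):
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, client_datasets):
    local_ws = [local_update(w_global, d) for d in client_datasets]
    return sum(local_ws) / len(local_ws)   # server only sees the weights

clients = [
    [(1.0, 2.0), (2.0, 4.0)],   # client A's private data (w ≈ 2.0)
    [(1.0, 2.2), (3.0, 6.6)],   # client B's private data (w ≈ 2.2)
]
w = 0.0
for _ in range(20):
    w = federated_round(w, clients)
```

Even here, privacy is not automatic: the shared updates themselves can leak information, which is exactly why techniques like gradient masking and differential privacy are often combined with federated learning.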


Finally, there are a number of standards that pertain to trustworthiness. ISO/IEC 30147 “Integration of trustworthiness in IoT lifecycle processes” and ISO/IEC 30149 “IoT trustworthiness principles” are two ISO/IEC standards.

If you want to collaborate with me or learn more about a specific topic that is related to my research topics, please send me an email.

My Lecture about the IoT and Data Privacy

We live in a world where even brushing our teeth can mean transmitting data to servers across the world. One day, we will sleep on smart pillows that detect our stress levels and send them to an app on our phone. We already wear fitness trackers all day, every day. What does this mean for our privacy? This is what I talked about during my two-hour guest lecture at Malmö University on December 15.

The Internet of Things (IoT) is all around us, and with it comes an increased risk of privacy and security breaches. In the age of the IoT, we must be cautious about the information we make available to the public or share with shops and manufacturers. We must also consider how businesses may exploit personal data to discriminate against us or charge us extra since they have more knowledge about us thanks to these devices. 

Please feel free to get in touch if you need any information about privacy, security, or related topics.