True Privacy in the Digital Age - Is it real and necessary?

By: Rachna Narem, Senior Partner, Director, EightBar

Rachna Narem is a versatile marketing professional with deep knowledge of analytics and strong client relationships throughout North America, Asia Pacific, Europe, and Latin America. She leverages 18 years of hands-on modeling and analytics experience to provide strategic consulting and leadership to a variety of clients, translating insights into short-term tactical and long-term strategic recommendations for continued growth.

When it comes to digital data — photos, conversations, health information or finances — nothing can be perfectly private.

Streams of data from mobile phones and other online devices expand the volume, variety, and velocity of information about every facet of our lives and put privacy into the spotlight as a global public policy issue.

Artificial Intelligence (AI) accelerates this trend. AI has made considerable contributions to nearly every aspect of our lives. The enormous volumes of information we now gather and analyze with AI tools are helping us tackle societal problems that were previously hard to address. Unfortunately, like many other technologies, AI brings its fair share of drawbacks and can harm us in several ways.

As artificial intelligence evolves, it magnifies the ability to use personal information in ways that intrude on privacy, raising the analysis of personal data to new levels of power and speed.

When you use AI-powered technology, you are often unknowingly or unwillingly revealing private data such as your age, location, and preferences. Tracking companies collect this data, analyze it, and use it to customize your online experience. They can also sell your private information to other entities without your knowledge or consent.

The enormous volumes of data that companies feed into AI-driven algorithms are susceptible to data breaches as well. AI may also generate personal data without the individual's permission. And for the entities charged with keeping and protecting people's data, including governments and big tech companies, what is best for consumer privacy may not always align with their own priorities.

Machine Learning (ML) algorithms can infer sensitive information from seemingly innocuous data. For example, based on a person's typing patterns on a keyboard, AI can predict emotional states such as anxiety, nervousness, or confidence.
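To make the point concrete, here is a minimal sketch of how such an inference could work. The features, labels, and values are purely hypothetical illustrations, not a real model: the idea is simply that timing data, which no one thinks of as sensitive, can be turned into a sensitive prediction.

```python
# Minimal sketch (hypothetical data): inferring an "emotional state" label
# from keystroke timing features using an off-the-shelf classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per typing session:
# [mean key-hold time (ms), mean gap between keys (ms), backspace rate, chars/sec]
X = np.array([
    [95, 180, 0.02, 5.1],   # relaxed, fluent typing
    [140, 310, 0.11, 2.8],  # hesitant, error-prone typing
    [100, 200, 0.03, 4.7],
    [155, 340, 0.14, 2.4],
])
y = ["confident", "anxious", "confident", "anxious"]  # illustrative labels

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# A new session's timing profile alone is enough to produce a sensitive
# inference, even though no "emotional" data was ever collected directly.
print(model.predict([[150, 320, 0.12, 2.6]]))
```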

Similarly, facial recognition tools are also invading our privacy. With rich databases of digital photographs available via social media, websites, driver’s license registries, surveillance cameras, and many other sources, machine recognition of faces has progressed rapidly from fuzzy images of cats to rapid (though still imperfect) recognition of individual humans. Facial recognition systems are being deployed in cities and airports across America.

Customers have always cared about their private information. Yet organizations today are using big data analytics to take actions that may breach their consumers’ privacy.

A survey of 5,000 people from different countries revealed that 63% of respondents prefer privacy over customer experience and want companies to avoid using AI if it invades their privacy, no matter how delightful the resulting experience. About 71% of respondents worried that AI would end up making major decisions for them without their consent or knowledge.

No doubt, users’ private data helps AI systems perform their tasks better, but collecting it is not without risk. One risk is the use of private data for unintended purposes: users often do not know how their data will be processed, where it will be used, or whether it will be sold. Most commonly this data fuels programmatic advertising designed to drive purchases, but it can also be exploited by AI systems and bots for manipulation. In 2018, the Cambridge Analytica scandal revealed how personal data collected through Facebook could be used to manipulate elections.

The truth is there is no silver bullet and no foolproof way to protect privacy and data security, but there are plenty of basic, important steps companies and governments can take to reduce the risk:

  • Companies must avoid stockpiling every bit of available data and should limit themselves to storing only the data points they actually need (data minimization).
  • Companies must adopt technical approaches from privacy-preserving ML, such as federated learning, homomorphic encryption, and differential privacy, to reduce some of the privacy risks inherent in AI and ML, though these techniques are still in their infancy and have several shortcomings (see the sketch after this list).
  • To prevent privacy violations, laws are being implemented at the local, state, federal, and international levels. Their objective is to make privacy an obligatory part of compliance management, for example the CCPA (California Consumer Privacy Act), the GDPR (General Data Protection Regulation, law in the European Union), and HIPAA (Health Insurance Portability and Accountability Act).
  • Companies should move toward using zero- and first-party data and gradually away from second- and third-party data.
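As a minimal sketch of one of the privacy-preserving techniques named above, the snippet below applies differential privacy to a simple count query using the Laplace mechanism. The dataset, query, and epsilon values are illustrative assumptions, not recommended parameters for a real deployment.

```python
# Minimal sketch of differential privacy via the Laplace mechanism.
# All data and parameters are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(42)

ages = np.array([23, 35, 41, 29, 52, 47, 38, 31])  # hypothetical user ages

def private_count_over_40(data, epsilon=1.0):
    """Return a noisy count of records with age > 40.

    Adding or removing one person changes the true count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this query.
    """
    true_count = int(np.sum(data > 40))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print(private_count_over_40(ages))         # noisy answer near the true count of 3
print(private_count_over_40(ages, 0.1))    # smaller epsilon: more noise, stronger privacy
```

The analyst still gets a usable aggregate answer, but no single individual's presence in the dataset can be confidently inferred from the output, which is exactly the trade-off these techniques aim for.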

First-party data helps build trust with the people who visit your website and use your products. Data privacy regulations and the phase-out of third-party cookies will only make first-party data more important.

 Consumers "care about data privacy" and want more transparency and are willing to share their personal information with a company in exchange for more personalized shopping experiences, but companies need to build that brand trust first and ensure that the information will be kept secure.