
Why the U.S. Is Falling Behind Europe in AI Privacy Regulation: Experts


Regulators and law enforcement agencies in Europe and other parts of the world have become increasingly concerned about the growing risks to their citizens from the rapid evolution of generative artificial intelligence (AI) – particularly platforms like OpenAI’s ChatGPT.

ChatGPT and other chatbots have racked up millions of users over the past 18 months, with few guardrails in place to protect sensitive personal information. Authorities are looking for ways to slow the precipitous rush into an unknown digital future.

Last year, the European Union Agency for Law Enforcement Cooperation (Europol) warned that large language models (LLMs) such as ChatGPT can make it easier for criminals to engage in fraud, impersonation, and social engineering, craft phishing messages, create malware, and even facilitate terrorism.

That is quite a lot of power to hand to bad actors.

Around the same time, Italy temporarily banned ChatGPT after a glitch exposed user data files. The Italian Data Protection Authority threatened OpenAI with millions of dollars in fines for privacy violations until the company provided clarity on where users’ information goes.

These concerns contributed to the European Parliament approving the Artificial Intelligence Act — the world’s first legal framework governing AI — which aims to ensure that AI-based systems are safe and “respect fundamental rights, safety, and ethical principles” while supporting innovation. The regulation was agreed in negotiations with member states in December 2023.

The Act is part of a wider package of European policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI. The law will come into full effect within two years, giving organizations time to ensure their compliance.

Key Takeaways

  • Concerns about the risks of AI have led to regulatory action in Europe via the EU’s Artificial Intelligence Act.
  • In contrast, the U.S. lacks comprehensive AI regulation, leading to a patchwork of state laws.
  • Experts from Ernst & Young, Protegrity, and Duality Technologies explain the differences, the risks, and the potential economic hit of operating without clear guidelines.
  • However, recent executive orders signal a growing interest in AI governance at the federal level.
  • New technologies, such as Privacy-Enhancing Technologies (PETs), may offer a solution for data privacy and security, and companies like Mastercard are exploring their potential.

The U.S. Lags Behind Europe on Common AI and Privacy Regulation

While the EU has the GDPR to safeguard data privacy, and now the AI Act specifically focuses on safety issues, the U.S. does not have comprehensive legislation to deal with the intersection of AI, privacy, and market dynamics.

While lawmakers this week proposed a federal American Privacy Rights Act, it would need to pass both houses of Congress and has only a slim chance of receiving the President’s signature in an election year.

“The United States federal government has not done the job necessary to align our states and territories to a common framework,” Nathan Vega, Vice President of Product Marketing and Strategy at Protegrity, told Techopedia.

“A key challenge arises from a fundamental misalignment in how the states and the federal government view data about citizens, a misalignment that the GDPR resolved in Europe.

“The GDPR, at its core, extends the rights of citizens in the EU to the data about them. In effect, the data about an individual is their data, and citizens are empowered to determine who uses their data and for what purposes.

“The impact of this for businesses operating in the EU is that they must have clear privacy policies, be transparent about the data that is captured, and be prescriptive about how they will use a citizen’s data.”

In the absence of a nationwide privacy law, various states have taken on the work of developing individual privacy regulations with varying levels of enforcement.

Vega added:

“The global implication from a lack of holistic federal privacy law in the U.S. is greater costs and complexity for participating in the U.S. market.”

Back in 2019, the Business Roundtable, an association of chief executive officers (CEOs) of America’s leading companies, urged Congress to “urgently” pass “a comprehensive federal consumer data privacy law to strengthen consumer trust and establish a stable policy environment.”

Vega noted: “The reason that 51 tech leaders from the largest companies on the planet signed a petition to have a national privacy law might have included the goodness of their hearts, but it was about costs…

“What we’re going to see, which is already happening, is businesses exiting markets that are not worth the value they can capture. It is not hard to imagine a large global brand determining it won’t operate in the U.S., or will operate only in limited parts of it, because there’s not enough value to continue operating nationally.”

John D. Hallmark, U.S. Political and Legislative Leader at consulting firm Ernst & Young, told Techopedia that the U.S. approach to regulating AI is “deliberate” rather than delayed, “with U.S. policymakers intensely engaged in learning about the technology, understanding its potential, and developing governance frameworks that reflect the fast-paced nature of this sector. This approach is reflected in last year’s executive order on AI.

“This past weekend, data privacy leaders in Congress announced a new proposal on data privacy, so while negotiations will certainly continue, the U.S. is not ignoring this issue,” Hallmark said.

“Meanwhile, other governments, including the EU and various U.S. states like California, are writing their own rules on data privacy. The lack of a comprehensive U.S. federal law on data privacy does not mean that there are no rules at all, especially when so many large tech companies have a global footprint. These organizations will also themselves need to develop their strategic approach to risk management.”

Ronen Cohen, VP of Strategy at Duality Technologies, told Techopedia that while the U.S. has historically lagged behind the rest of the world in terms of data privacy protections, it is closing the gap.

President Biden’s Executive Order on AI last October gave clear direction on privacy and confidentiality, and in February, the White House issued another Executive Order on Preventing Access to Americans’ Bulk Sensitive Personal Data.

Cohen added:

“That being said, I think a federal privacy law in the USA is not in the cards in the near term. This, of course, has ramifications for our globalized economies, where data and insights need to flow as freely as possible.

“Because of this, and because much of the world already has laws in place around data privacy, security, and sovereignty, there is an opportunity to leverage technologies like Privacy Enhancing Technologies (PETs), which can bridge these gaps, enabling companies to meet or surpass the privacy requirements of any given jurisdiction while still being able to leverage data, build models, and utilize AI/ML.”

The Potential Role of Privacy-Enhancing Technologies

Privacy-Enhancing Technologies (PETs) are a group of software and hardware tools that organizations can use to minimize the personal information about a customer that they collect and use while maximizing their security safeguards.

PETs can be effective for secure AI model training, generating anonymous statistics, and sharing protected data with third parties that would otherwise be too sensitive to share. PETs can provide assurance to users in the public and private sectors that they can safely train AI models on sensitive information and collaborate with peers to derive new insights while safeguarding customer data.

According to the UK Information Commissioner’s Office (ICO), PETs include:

  • Differential privacy — a mathematical method of generating anonymized statistics from a dataset by adding calibrated random noise to the output of computations (see the sketch after this list).
  • Synthetic data, which provides realistic artificial datasets when access to large datasets containing real information is not possible.
  • Homomorphic encryption, which enables computations on encrypted data without first decrypting it.
  • Zero-knowledge proofs (ZKP), a data minimization method increasingly used in blockchain networks that enables an individual to prove the accuracy of private information without actually revealing the information.
  • Trusted execution environments enable a secure part of a computer processor — isolated from the main operating system and other applications — to process data.
  • Secure multiparty computation (SMPC) allows different parties to jointly process combined data, without any party needing to share all of its information with the other parties.
  • Federated learning trains machine learning (ML) models in distributed settings while minimizing the amount of personal information that each party shares. Using federated learning alone may not be sufficient to completely protect personal information, so it may need to be combined with other PETs at different stages of data processing.
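To make the first item above concrete, here is a minimal sketch of the Laplace mechanism, a standard way to answer a counting query with differential privacy. The dataset, query, and epsilon value are hypothetical, and a real deployment would rely on a vetted differential privacy library rather than this toy sampler.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse transform."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the true count by at most 1), so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: how many customers in this dataset are over 40?
ages = [34, 29, 41, 52, 38, 27, 45, 61, 33]
print(private_count(ages, lambda age: age > 40, epsilon=0.5))
```

Lower epsilon values add more noise and therefore give stronger privacy; the trade-off against accuracy is the central design decision in any differential privacy deployment.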

PETs in Action

A case study by Singapore’s Infocomm Media Development Authority (IMDA) illustrates how card payment network Mastercard has been simulating the use of PETs to help combat financial crimes such as money laundering by collaborating with third parties across several jurisdictions while complying with data privacy, security, and financial regulations.

Mastercard developed a proof of concept in IMDA’s PET Sandbox program to investigate a Fully Homomorphic Encryption (FHE) product provided by a third-party supplier, using it to share financial crime intelligence across the U.S., India, and the UK while complying with privacy and cross-border data protection regulations.

Mastercard concluded that an application programming interface (API) built on FHE technology holds promise, although existing regulations may need to be updated to accommodate how the process is managed and how source data is maintained.
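The core idea behind computing on encrypted data can be illustrated with a much simpler scheme. The sketch below uses the open-source python-paillier (phe) package; Paillier encryption is only additively homomorphic, not fully homomorphic like the product Mastercard tested, and the banks, counts, and workflow are invented for illustration.

```python
# Assumes the open-source python-paillier package: pip install phe
from phe import paillier

# A regulator (or other designated key holder) generates the keypair.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Two institutions encrypt their suspicious-transaction counts locally;
# the plaintext values never leave their own systems.
bank_a_count = public_key.encrypt(17)
bank_b_count = public_key.encrypt(42)

# An untrusted aggregator adds the ciphertexts directly, learning
# nothing about the individual counts.
encrypted_total = bank_a_count + bank_b_count

# Only the private-key holder can decrypt the combined result.
print(private_key.decrypt(encrypted_total))  # 59
```

The design point is that the aggregator never holds a decryption key, so pooled intelligence can be computed across jurisdictions without any single party exposing raw customer data.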

Cohen said the use of PETs “has downstream benefits like maintaining national security in the public sector and driving new products, services, revenues, and efficiencies in the private sector.”

“By prioritizing such technologies, which enable a privacy-by-design approach to utilizing data and AI models, we can all unlock AI’s full potential while upholding our fundamental civil and privacy rights and ensuring that AI serves as a force for good.”

The Bottom Line

The U.S. has historically lagged behind regions such as Europe in implementing robust, widespread regulations to manage AI’s impact on citizens’ privacy and safety. But lawmakers are taking steps to introduce legislation, and the White House has indicated a clear interest in providing guidance that aims to safeguard consumer privacy in the age of AI.

Whether or not national regulation arrives in the U.S., international companies that operate in jurisdictions outside the U.S. will need to comply with the tougher requirements elsewhere. PETs can help companies meet the disparate privacy requirements of any jurisdiction while still being able to leverage data, build models, and use AI and ML applications.

Source: Techopedia
