Crypto Investor News Network
NFT

Experts Explain Why It’s Difficult (But Important) to Get Right

2023-07-26 · 9 Mins Read

From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation on steroids. Indeed, on March 22, 2023, a group of artificial intelligence researchers and industry figures urged the industry to pause further training of the latest AI technologies or, barring that, urged governments to “impose a moratorium.”

These technologies – image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA – are now available to millions of people and don’t require technical knowledge to use.

Given the potential for widespread harm as technology companies roll out these AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three experts on technology policy to explain why regulating AI is such a challenge – and why it’s so important to get it right.


Human foibles and a moving target

S. Shyam Sundar

The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion. Gushing media coverage has fueled irrational beliefs about AI’s abilities and consciousness. Such beliefs build on “automation bias” or the tendency to let your guard down when machines are performing a task. An example is reduced vigilance among pilots when their aircraft is flying on autopilot.

Numerous studies in my lab have shown that when a machine, rather than a human, is identified as a source of interaction, it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible, and so on. It clouds the user’s judgment and results in the user overly trusting machines. However, simply disabusing people of AI’s infallibility is not sufficient, because humans are known to unconsciously assume competence even when the technology doesn’t warrant it.

Research has also shown that people treat computers as social beings when the machines show even the slightest hint of humanness, such as the use of conversational language. In these cases, people apply social rules of human interaction, such as politeness and reciprocity. So, when computers seem sentient, people tend to trust them blindly. Regulation is needed to ensure that AI products deserve this trust and don’t exploit it.

AI poses a unique challenge because, unlike in traditional engineering systems, designers cannot be sure how AI systems will behave. When a traditional automobile was shipped out of the factory, engineers knew exactly how it would function. But with self-driving cars, the engineers can never be sure how they will perform in novel situations.

Lately, thousands of people around the world have been marveling at what large generative AI models like GPT-4 and DALL-E 2 produce in response to their prompts. None of the engineers involved in developing these AI models could tell you exactly what the models will produce. To complicate matters, such models change and evolve with more and more interaction.

All this means there is plenty of potential for misfires. Therefore, a lot depends on how AI systems are deployed and what provisions for recourse are in place when human sensibilities or welfare are hurt. AI is more of an infrastructure, like a freeway. You can design it to shape human behaviors in the collective, but you will need mechanisms for tackling abuses, such as speeding, and unpredictable occurrences, like accidents.

AI developers will also need to be inordinately creative in envisioning ways that the system might behave and try to anticipate potential violations of social standards and responsibilities. This means there is a need for regulatory or governance frameworks that rely on periodic audits and policing of AI’s outcomes and products, though I believe that these frameworks should also recognize that the systems’ designers cannot always be held accountable for mishaps.


Combining ‘soft’ and ‘hard’ approaches

Cason Schmit

Regulating AI is tricky. To regulate AI well, you must first define AI and understand anticipated AI risks and benefits. Legally defining AI is important to identify what is subject to the law. But AI technologies are still evolving, so it is hard to pin down a stable legal definition.

Understanding the risks and benefits of AI is also important. Good regulations should maximize public benefits while minimizing risks. However, AI applications are still emerging, so it is difficult to know or predict what future risks or benefits might be. These kinds of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.

Lawmakers are often too slow to adapt to the rapidly changing technological environment. Some new laws are obsolete by the time they are enacted or even introduced. Without new laws, regulators have to use old laws to address new problems. Sometimes this leads to legal barriers for social benefits or legal loopholes for harmful conduct.

“Soft laws” are the alternative to traditional “hard law” approaches of legislation intended to prevent specific violations. In the soft law approach, a private organization sets rules or standards for industry members. These can change more rapidly than traditional lawmaking. This makes soft laws promising for emerging technologies because they can adapt quickly to new applications and risks. However, soft laws can mean soft enforcement.

Megan Doerr, Jennifer Wagner, and I propose a third way: Copyleft AI with Trusted Enforcement (CAITE). This approach combines two very different concepts in intellectual property — copyleft licensing and patent trolls.

Copyleft licensing allows for content to be used, reused, or modified easily under the terms of a license – for example, open-source software. The CAITE model uses copyleft licenses to require AI users to follow specific ethical guidelines, such as transparent assessments of the impact of bias.

In our model, these licenses also transfer the legal right to enforce license violations to a trusted third party. This creates an enforcement entity that exists solely to enforce ethical AI standards and can be funded in part by fines from unethical conduct. This entity is like a patent troll in that it is private rather than governmental and it supports itself by enforcing the legal intellectual property rights that it collects from others. In this case, rather than enforcement for profit, the entity enforces the ethical guidelines defined in the licenses — a “troll for good.”

This model is flexible and adaptable to meet the needs of a changing AI environment. It also enables substantial enforcement options like a traditional government regulator. In this way, it combines the best elements of hard and soft law approaches to meet the unique challenges of AI.


Four key questions to ask

John Villasenor

The extraordinary recent advances in large language model-based generative AI are spurring calls to create new AI-specific regulation. Here are four key questions to ask as that dialogue progresses:

1) Is new AI-specific regulation necessary? Many of the potentially problematic outcomes from AI systems are already addressed by existing frameworks. If an AI algorithm used by a bank to evaluate loan applications leads to racially discriminatory loan decisions, that would violate the Fair Housing Act. If the AI software in a driverless car causes an accident, products liability law provides a framework for pursuing remedies.

2) What are the risks of regulating a rapidly changing technology based on a snapshot in time? A classic example of this is the Stored Communications Act, which was enacted in 1986 to address then-novel digital communication technologies like email. In enacting the SCA, Congress provided substantially less privacy protection for emails more than 180 days old.

The logic was that limited storage space meant that people were constantly cleaning out their inboxes by deleting older messages to make room for new ones. As a result, messages stored for more than 180 days were deemed less important from a privacy standpoint. It’s not clear that this logic ever made sense, and it certainly doesn’t make sense in the 2020s, when the majority of our emails and other stored digital communications are older than six months.

A common rejoinder to concerns about regulating technology based on a single snapshot in time is this: If a law or regulation becomes outdated, update it. But this is easier said than done. Most people agree that the SCA became outdated decades ago. But because Congress hasn’t been able to agree on specifically how to revise the 180-day provision, it’s still on the books over a third of a century after its enactment.

3) What are the potential unintended consequences? The Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA-SESTA), passed in 2018, revised Section 230 of the Communications Decency Act with the goal of combating sex trafficking. While there’s little evidence that it has reduced sex trafficking, it has had a hugely problematic impact on a different group of people: sex workers who used to rely on the websites knocked offline by FOSTA-SESTA to exchange information about dangerous clients. This example shows the importance of taking a broad look at the potential effects of proposed regulations.

4) What are the economic and geopolitical implications? If regulators in the United States act to intentionally slow the progress in AI, that will simply push investment and innovation — and the resulting job creation — elsewhere. While emerging AI raises many concerns, it also promises to bring enormous benefits in areas including education, medicine, manufacturing, transportation safety, agriculture, weather forecasting, access to legal services, and more.

I believe AI regulations drafted with the above four questions in mind will be more likely to successfully address the potential harms of AI while also ensuring access to its benefits.


This article is republished from The Conversation under a Creative Commons license. Read the original article by S. Shyam Sundar, James P. Jimirro Professor of Media Effects, Co-Director, Media Effects Research Laboratory, & Director, Center for Socially Responsible AI, Penn State; Cason Schmit, Assistant Professor of Public Health, Texas A&M University, and John Villasenor, Professor of Electrical Engineering, Law, Public Policy, and Management, University of California, Los Angeles.
