AI, Law and Ownership: What DABUS, Deepfakes and Data Scraping Teach Us
- rizmughal
- Oct 2

Artificial Intelligence isn’t just writing poems or generating logos anymore. It’s testing the very edge of our legal system. From the DABUS Supreme Court case in the UK, to copyright fights in the US, to sweeping new AI laws in the EU and China, we’re in dangerously uncharted territory. The big question: who really owns AI creations, and how do we regulate machines that don’t care about the rules?
This isn’t just theory. It’s about whether companies like OpenAI, Anthropic, and upstarts like DeepSeek can operate freely, whether creatives can protect their work, and whether society can trust what it sees online.
DABUS: The AI Inventor Courts Don’t Recognise
The DABUS saga is legendary. Dr Stephen Thaler built an AI system, DABUS (Device for the Autonomous Bootstrapping of Unified Sentience), that independently designed new inventions, including a futuristic food container. Thaler argued DABUS should be listed as the “inventor” on the patent.
The UK Supreme Court said no. Only humans can be inventors. Period.
The reasoning was simple: the law was written with humans in mind. Without a human inventor, the system breaks. But let’s be honest: this feels more like kicking the can down the road. AI can create. Pretending otherwise just avoids the messy policy questions about ownership, incentives, and liability. Was the DABUS ruling, dare I say, a lazy decision?
US Copyright: Human Authorship is “Bedrock”
Meanwhile, across the Atlantic, the US Copyright Office has drawn a hard line. Works generated by AI, whether from ChatGPT, Claude, or DeepSeek, don’t get copyright unless there’s a clear human hand guiding them.
One test case involved Zarya of the Dawn, a graphic novel with AI-generated images. The human author was denied copyright for the pictures, because the AI did too much of the creative heavy lifting. The US Copyright Office summed it up: human authorship is a bedrock requirement.
That leaves a strange gap. If an AI creates something new, is it just public domain? Or do companies quietly capture ownership through terms of service (the fine print most people never read)?
The Data Scraping Elephant in the Room
Before we even get to outputs, let’s talk about inputs. ChatGPT, Claude, DeepSeek, and many others only exist because the internet was scraped: millions of books, articles, images, and code snippets fed into their training.
This is where litigation is heating up:
Getty Images v Stability AI: Getty accuses Stability of “brazen” copyright infringement on a massive scale.
Artists v Midjourney, DeviantArt & Stability AI: a class action claiming wholesale theft of artistic styles.
Programmers v GitHub Copilot: alleging code was scraped without consent.
The legal angles range from breach of contract (ignoring website terms of use) to copyright and database rights. Add GDPR into the mix, especially around the scraping of personal data, and the foundations of today’s AI models look shaky.
Global Patchwork: Four Countries, Four Different Answers
One thing is clear: there’s no single codified “AI law.” That’s hardly unusual; the same is true of many niche practice areas of law. Instead, the world is a messy patchwork:
US: Relies on fair use. Courts ask whether training or output is transformative and whether it harms the market.
China: Recently, a court ruled an AI-generated picture was protected because the human prompts showed “aesthetic judgment”. This sounds like a sensible approach to me.
Japan & Singapore: Both explicitly allow computational data analysis for training purposes.
UK: Toying with an “opt-out” for text and data mining, but creative industries are fiercely opposed to this.
This fragmentation creates opportunities for jurisdiction shopping. Train in a lenient country, sell everywhere else.
Deepfakes, Compliance and the Reality of Bad Actors
Here’s the brutal truth: regulations only work on people who play by the rules.
The EU AI Act requires deepfakes to be labelled. The White House has pushed for watermarking of AI outputs. China has imposed rules on providers to stop fake or harmful content spreading.
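To make this concrete, here is what a label can look like at its most basic: a piece of metadata attached to the file. The sketch below (Python with Pillow, made-up key names, not any official scheme) writes and reads such a flag on a PNG. Real provenance standards such as C2PA are far more elaborate, but the underlying weakness is the same: the label only survives if everyone in the chain chooses to keep it.

```python
# A minimal, easily stripped "AI-generated" label stored as PNG text metadata.
# The key names (ai_generated, generator) are made up for illustration.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), "white")      # stand-in for a generated image

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")
img.save("labelled.png", pnginfo=meta)

# Anyone can read the label back...
with Image.open("labelled.png") as f:
    print(f.text.get("ai_generated"))          # -> "true"

# ...and anyone can drop it: a plain re-save without pnginfo loses the text chunks.
with Image.open("labelled.png") as f:
    f.save("stripped.png")
```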
But ask yourself (and I wonder whether the legislators did this exercise): is a scammer really going to watermark their fake political video or fake FOREX platform? Of course not, that’s absurd. Bad actors, and in fact plenty of average users, don’t want you to know their output was AI-generated!
The risk is a two-tier AI world:
Compliant players (OpenAI, Anthropic, DeepSeek): ethical, established organisations jumping through regulatory hoops, which risks stifling innovation and widening the gap between start-ups and giants with massive funding.
Rogue operators: generating unchecked, unlabelled content with zero accountability.
That means regulation alone won’t save us. We need smarter detection tools, digital literacy, and frankly, a dose of scepticism in how we consume media.
Why Companies Can’t Sit Back
While governments argue, businesses don’t have the luxury of waiting. Every organisation dabbling in AI, whether in HR, marketing, legal, or product development, needs AI governance baked in.
That means:
AI Usage Policies: what tools are allowed, what for, and with whose approval, taking GDPR and the firm's privacy policy into consideration.
AI Impact Statements: like environmental impact reports, but for AI. How is this tool affecting customers, compliance, and employees?
Algorithmic Audits: mapping which AI systems are in use and what risks they carry (see the register sketch after this list).
Transparency Statements: plain-English explanations of how AI is used and why.
Training: not just for engineers, but boards and staff, so they know the risks of plugging sensitive data into ChatGPT or Claude.
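To ground the audit point, here is a minimal sketch of what an AI system register might look like. The fields and example entries are hypothetical, not any regulator’s template, but they capture the questions an audit should answer.

```python
# A minimal, hypothetical AI system register: the inventory an algorithmic
# audit produces. Field names and example entries are illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemEntry:
    name: str            # tool or model in use
    owner: str           # accountable team or person
    purpose: str         # what it is used for
    personal_data: bool  # does it touch personal data (GDPR relevance)?
    risk_level: str      # e.g. "low", "medium", "high"
    next_review: date    # governance is not "one and done"

register = [
    AISystemEntry("chat-assistant", "Marketing", "drafting copy",
                  personal_data=False, risk_level="low",
                  next_review=date(2026, 3, 1)),
    AISystemEntry("cv-screening-model", "HR", "shortlisting candidates",
                  personal_data=True, risk_level="high",
                  next_review=date(2026, 1, 15)),
]

# Flag anything high-risk or overdue for its scheduled review.
for entry in register:
    if entry.risk_level == "high" or entry.next_review < date.today():
        print(f"Review needed: {entry.name} (owner: {entry.owner})")
```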
And here’s the kicker: these can’t be “one and done.” Policies must be living, breathing documents. AI is evolving faster than regulation, so if your company updates policies once a year, you’re already behind.
My Take
The DABUS case and US copyright fights tell us one thing: courts are still clinging to human authorship. The EU AI Act, US executive orders, and China’s strict rules tell us regulators are scrambling to keep up. But none of this touches the bad actors who will never label their deepfakes or log their training data.
For me, the real opportunity isn’t just in regulation. It's in companies stepping up. If businesses adopt robust AI governance, update policies, and publish impact statements, they won’t just avoid lawsuits. They’ll build trust, reputation, and a competitive edge.
ChatGPT, Claude, DeepSeek, and every model that comes after them will keep creating, enhancing, and improving. The question is whether our laws, businesses, and ethics can catch up and keep up.


