By PJ Chandra
OpenAI’s new text-to-video system, Sora, represents one of the most significant steps in generative AI to date. It can produce realistic video scenes, copy artistic styles, and generate short films from just a few lines of text.[1] The tool’s public debut prompted an immediate cultural reaction, including an entire South Park episode, “Sora Not Sorry” (Season 28, Episode 3), which satirized students misusing AI video.[2] When a mainstream animated series such as South Park responds in real time, it shows how deeply AI has entered public discourse. Beyond the humor, however, Sora raises a much larger question: who gets paid, protected, and visible in the next era of digital creativity, and who gets pushed out?
Background: What Sora Can Do and Why it Raises Legal Flags
Sora can fabricate entire environments, animate realistic characters, imitate the visual styles of famous directors, and even generate videos that resemble real individuals.[3] Those features immediately implicate federal copyright law, which protects “original works of authorship” under 17 U.S.C. §§ 101–102.[4] If a Sora-generated video copies the look and feel of a copyrighted work or incorporates protected story elements, existing copyright doctrines, including substantial similarity, derivative works, and fair use, come into play. Sora also raises right of publicity concerns. The model can plausibly reproduce a person’s face, voice, or persona, potentially violating statutes such as Fla. Stat. § 540.08 or Cal. Civ. Code § 3344.[5] Both statutes prohibit using a person’s name, likeness, voice, or other identifying features for advertising purposes without consent. Courts have held that misappropriating a person’s likeness for commercial purposes can trigger liability.[6] Finally, because AI impersonations have been shown to deceive consumers, the Federal Trade Commission has warned that synthetic media may amount to unfair or deceptive practices under § 5 of the FTC Act, particularly when used to manipulate, impersonate, or defraud.[7] In other words, Sora sits at the intersection of multiple legal regimes, none of which were designed with AI video in mind.
Economic Justice Problem #1: Devaluation of Creative Labor
Sora makes it possible to generate polished videos in minutes. That speed has consequences for the humans whose work it displaces. Creative professionals, including animators, video editors, illustrators, actors, background performers, and early-career freelance artists, face downward wage pressure as AI becomes a cheaper alternative. Economic analysts already predict that automation will disproportionately harm entry-level workers in creative fields.[8] Even more troubling, AI-generated video lacks “human authorship” and is therefore ineligible for copyright protection.[9] That means human creators whose styles or likenesses are absorbed into training datasets may see their artistic influence replicated without compensation.
Labor negotiations are evolving in response. Recent SAG-AFTRA and WGA collective bargaining agreements restrict studios from scanning and digitally reproducing performers without consent or payment.[10] Disputes over background actors being asked to submit full-body scans show how fragile creative labor markets have become in the wake of AI tools.[11] AI is already changing hiring practices, budget decisions, and staffing levels.
South Park captured this anxiety through its parody of AI-generated voices of Eric Cartman, one of the show’s main characters. Though comedic, the episode mirrored a real-world fear: that human creative identity will be replaced by automated versions that do not pay, credit, or respect the original source. Without targeted protections, Sora could accelerate the commodification of creative labor, deepening economic insecurity for workers already facing uncertainty.
Economic Justice Problem #2: Platform Power and Algorithmic Inequality
Generative AI does not just change how content is made; it also changes who benefits most from the attention economy. Because Sora enables rapid mass production of video, creators or studios with access to AI can flood platforms with hundreds of videos at a time. Algorithms tend to reward volume, velocity, and engagement metrics, so AI content can bury the work of independent creators. Social platforms are largely shielded from liability for such decisions by Section 230, which generally immunizes platforms from liability for user-generated content even when they moderate, recommend, or organize that content.[12] This dynamic concentrates economic power in the hands of platforms, which already control who gets monetized, recommended, or suppressed. As the volume of AI content increases, the share of revenue that reaches small creators will shrink while platform profits grow.[13] Algorithmic ranking systems often reinforce existing inequalities and intersect with broader patterns of discrimination.[14] Federal courts have increasingly acknowledged that algorithmic tools can discriminate against applicants based on protected characteristics by relying on biased training data and using proxies correlated with race, age, and disability.[15] When AI-generated videos flood online spaces, discoverability becomes even harder, worsening these inequities. In South Park, adults in the episode, including the school principal, repeatedly fell for the kids’ manipulated AI videos, a satire of how people cannot distinguish real from AI-generated media. That confusion has real economic implications: audiences lose trust in creators, advertisers hesitate, and platform recommendation systems shift toward larger, established creators.
Economic Justice Problem #3: Misinformation and Eroding Trust
Sora also magnifies the problem of online misinformation. AI videos undermine the ability of journalists and legitimate creators to establish reliability with their audiences.[16] High-quality deepfakes can spread faster than corrections, leaving people with lower digital literacy especially vulnerable.[17] States are beginning to respond; Texas, for example, prohibits election-related deepfakes intended to deceive voters.[18] The First Amendment, however, limits broad regulation of false speech. In United States v. Alvarez, the Court held that even knowingly false statements receive constitutional protection unless linked to specific harms such as fraud or defamation.[19] That reality makes aggressive restrictions on AI-generated misinformation difficult to sustain. Researchers have warned that digital literacy gaps amplify this vulnerability.[20] For creators trying to build trust online, an environment saturated with AI-powered misinformation creates barriers to both visibility and audience retention.
Policy Pathways: A Fairer AI Creative Economy
To avoid an AI-driven concentration of power, policymakers and platforms must build fairness into the system, and several solutions stand out. First, the FTC can impose clearer transparency obligations, including watermarking or provenance metadata for AI-generated videos.[21] Second, state legislatures can update right of publicity laws to explicitly cover AI impersonation and require affirmative consent for digital likeness use.[22] Third, the government can support independent creators through grants, tax credits, and subsidized access to ethical AI tools, ensuring that power does not flow only to those with the largest budgets. Copyright law could also create an opt-in licensing model that compensates artists when their works are used as training data. Platforms themselves should be required to issue transparency reports explaining how their algorithms rank AI-generated video against human-created work. These reforms would not slow innovation, but they would ensure that innovation benefits more than a handful of large technology firms.
Conclusion: The Stakes of Sora
Sora is more than a new creative tool; it signals a fundamental shift in the economics of cultural production. South Park may have captured the cultural anxiety first, but law and policy must deal with the real-world impacts: labor displacement, platform concentration, misinformation, and unequal access to AI resources. With thoughtful legal safeguards, Sora could empower creativity and expand opportunity. Without intervention, AI risks deepening existing inequities and eroding the economic foundations of creative work. The future of digital culture depends on how we regulate and share the benefits of tools like Sora.
[1] OpenAI, Introducing Sora (Feb. 15, 2024), https://openai.com/sora (last visited Nov. 19, 2025).
[2] South Park: Sora Not Sorry (Comedy Central television broadcast Nov. 6, 2025).
[3] OpenAI, supra note 1.
[4] 17 U.S.C. §§ 101–102.
[5] Fla. Stat. § 540.08; Cal. Civ. Code § 3344.
[6] Zacchini v. Scripps-Howard Broad. Co., 433 U.S. 562 (1977).
[7] Fed. Trade Comm’n, Guidance on AI Impersonation and Synthetic Media (Jan. 17, 2025), https://www.ftc.gov/system/files/ftc_gov/pdf/ai-accomplishments-1.17.25.pdf.
[8] Josh Bivens & Ben Zipperer, Unbalanced Labor Market Power Is What Makes Technology, Including AI, Threatening to Workers: The Best “AI Policy” to Protect Workers Is Boosting Their Bargaining Position, Econ. Pol’y Inst. (Mar. 28, 2024), https://www.epi.org/publication/ai-unbalanced-labor-markets/.
[9] 88 Fed. Reg. 16,190 (Mar. 16, 2023).
[10] Artificial Intelligence, SAG-AFTRA, https://www.sagaftra.org/contracts-industry-resources/member-resources/artificial-intelligence (last visited Nov. 20, 2025).
[11] Bobby Allyn, Movie Extras Worry They’ll Be Replaced by Artificial Intelligence, NPR (Aug. 1, 2023), https://www.npr.org/2023/08/01/1191242175/movie-extras-worry-theyll-be-replaced-by-artificial-intelligence.
[12] 47 U.S.C. § 230.
[13] Hemant K. Bhargava, The Creator Economy: Managing Ecosystem Supply, Revenue Sharing, and Platform Design, 68 MGMT. SCI. 5233 (2022).
[14] Id. at 5243.
[15] Mobley v. Workday, Inc., 740 F. Supp. 3d 796, 810-11 (N.D. Cal. 2024).
[16] NewsGuard, OpenAI’s Sora: When Seeing Should Not Be Believing (Oct. 17, 2025), https://www.newsguardtech.com/special-reports/openai-sora-seeing-should-not-be-believing/.
[17] Colleen McClain et al., How the U.S. Public and AI Experts View Artificial Intelligence, Pew Research Center (Apr. 3, 2025), https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence.
[18] Tex. Elec. Code § 255.004.
[19] United States v. Alvarez, 567 U.S. 709 (2012).
[20] D. Caled & M.J. Silva, Digital Media and Misinformation: An Outlook on Multidisciplinary Strategies Against Manipulation, 5 J. Comput. Soc. Sci. 123 (2022).
[21] FTC, supra note 7.
[22] See Fla. Stat. § 540.08; Cal. Civ. Code § 3344.
