Art Case Update - November 2024

Image Source: TechCrunch and Tesla.

Guest Work Agency is your go-to source for art-related legal cases and law reform in Australia, as well as select cases internationally.

In this story, our Director and Founder, Alana Kushnir, and Paralegal Lily Kruglova cover the latest law reform initiatives on intellectual property and artificial intelligence in Australia, as well as international cases involving AI-related copyright infringement claims and the applicability of cultural heritage laws outside their jurisdiction of origin.

Australia

IP Australia releases consultation feedback on design protections

On 6 August 2024, IP Australia, the Australian Government agency responsible for administering intellectual property (IP) law in Australia, released feedback from its consultation on enhancing Australian design protections. The consultation focused on virtual designs as well as partial and incremental designs.

The proposed changes to the Designs Act 2003 (Cth) involve refining the definition of virtual products to include user interfaces and product elements that are only visible when a screen is turned on, and ensuring that the owner of a virtual design right can elect to bring infringement proceedings against the seller of a virtual product. It also proposes steps to ensure that copyright protection is not lost when virtual designs are registered or commercialised.

If implemented, the protection of design innovation for parts of whole products would allow separate parts to be registered as a single design. Additionally, IP Australia plans to allow designers to protect their designs as products are developed, throughout the design lifecycle.

The Government is currently considering legislation to implement these proposals. Read about IP Australia’s consultation outcomes here.

Tasmanian Supreme Court quashes TasCAT Ladies Lounge Decision

In a recent update to the ongoing Ladies Lounge discrimination matter (previously covered in our August Art Case Update), the Tasmanian Supreme Court has overruled the decision of the Tasmanian Civil and Administrative Tribunal (TasCAT), which would have required MONA to admit men to its Ladies Lounge exhibit.

Proceedings were commenced under s 26 of the Anti-Discrimination Act 1998 (Tas) (the Act). That section permits discrimination under a program that promotes equal opportunity for a group of people who are disadvantaged.

Acting Justice Marshall acknowledged that discrimination against women was a present condition in society, and that women should be able to access an “exclusive space” that enables a “flipped universe”. His Honour’s reasoning was based on whether the Ladies Lounge created equal opportunity, as required under s 26. In this, his Honour wrote:

On the evidence, the unequivocal answer is 'yes' because the Ladies' Lounge was designed to provide women with an exclusive space where they receive positive advantage as distinct from the general societal disadvantage they experience.

The matter will now return to TasCAT, with consideration given to Acting Justice Marshall’s reasoning. Read the full judgment here.

 

OAIC published new guidance on privacy and using, developing and training AI models

The Office of the Australian Information Commissioner (OAIC) has released two pieces of guidance based on privacy considerations for developing and training generative AI models, as well as privacy in relation to the use of commercially available AI products.

Generative AI is referred to by the OAIC as “an AI model with the capability of learning to generate content such as images, text, and other media with similar properties to its training data,” as well as systems built on such models. The Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs) apply to all uses of AI that involve personal information, including when information is used to train, test, or operate an AI system.

The main takeaway when using commercially available AI tools is that privacy law obligations apply to any personal information entered into an AI system, as well as to the output data generated by AI. Where AI systems are used to generate or infer personal information, including images, it is considered a collection of personal information and must comply with APP 3 (which outlines when an entity can solicit and collect solicited personal information). For best practice, the OAIC recommends that organisations avoid uploading personal information, especially where it is considered sensitive, into publicly available generative AI tools due to the significant and complex privacy risks involved.

For developers, the OAIC advises that reasonable steps must be taken to ensure accuracy in generative AI models. Developers should consider whether the data they intend to use or collect (including publicly available data) contains personal information and must comply with their privacy obligations. When developers use personal information that they already hold for training an AI model - especially if this was not the primary purpose of collection - they need to carefully assess their privacy obligations. If there is no consent for a secondary, AI-related purpose, developers must establish that the individual would reasonably expect this secondary use, taking into consideration the individual’s expectations at the time the information was collected, and that the use is related (or directly related, for sensitive information) to the primary purpose (or that another exception applies).

Where a developer cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to a primary purpose, they should seek consent for that use and/or offer individuals a meaningful and informed opportunity to opt out, in order to avoid regulatory risk.

Access OAIC’s privacy guidance here and here.

 

The Federal Government has signalled its commitment to advancing major consumer law reforms in relation to AI

In October, the Australian Government’s Treasury released a discussion paper on ‘AI and the Australian Consumer Law’, seeking feedback on expanding the Australian Consumer Law (ACL) to include AI-specific consumer protections. There are currently no mandatory safety standards for AI in the ACL. Treasury is reviewing whether existing measures, including the Voluntary AI Safety Standard, ensure the safe use of AI. It is also considering mandatory safeguards for AI-related consumer products and services.

In addition, Treasury is exploring adapting remedies for defective AI products, such as a ‘presumption of causality’, which under certain conditions would shift the burden to manufacturers to prove no causal link to consumer loss or damage. Click here to read the discussion paper.

 

WA Museum criticised for acquiring Perspex cover vandalised by climate protesters

The Western Australian Museum Boola Bardip has acquired a Perspex cover that was vandalised by climate protesters in early 2023. Two protesters stencilled the logo of global energy company Woodside onto Frederick McCubbin’s 1889 painting Down on His Luck while it was on display at the Art Gallery of Western Australia (AGWA). The act was motivated by concerns over 50,000-year-old Indigenous rock art on the Burrup Peninsula, which the protesters believed was at risk from Woodside’s industrial activities on the Western Australian coast.

The protester who spray-painted the cover was fined $2,500 and ordered to pay AGWA $4,800 after being convicted of criminal damage. The court’s decision was based on s 445 of the WA Criminal Code, which states that “a person who unlawfully damages the property of another person without that other person's consent is guilty of an offence and is liable to imprisonment for 2 years and a fine of $24,000.”

Since acquiring the Perspex cover, the Museum has faced significant criticism for what has been described as ‘glorifying vandalism’.

  

International

OpenSea notified of impending lawsuit from the SEC

The U.S. Securities and Exchange Commission (SEC) has indicated that it is considering bringing a lawsuit against OpenSea - the world’s largest NFT marketplace by trading volume. The notice alleges that OpenSea engaged in misleading and deceptive conduct, resulting in unjust enrichment through the collection of fees on NFT transactions, and that NFTs traded on the platform are unregistered securities.

The notice further asserts that NFTs fall under the definition of securities as investment contracts. The SEC has previously made similar allegations against other companies, suggesting that NFTs are investment contracts and thus constitute securities under the Securities Act of 1933 (US).

In a statement published on OpenSea’s website, co-founder Devin Finzer emphasised that digital art should not be regulated in the same way as collateralised debt obligations, arguing that doing so would misinterpret the law, pose risks to artists’ livelihoods and their work, and “stifle innovation across the many promising use cases for NFTs”.

For now, the SEC has only issued a notice indicating its intent to sue; a formal lawsuit is the anticipated next step.

Read Devin Finzer’s statement addressing the potential lawsuit here.

 

Bollywood singer wins case in first AI voice-cloning infringement decision in India

The Bombay High Court has published a landmark decision in India concerning AI-driven voice cloning. Well-known Bollywood singer Arijit Singh brought an action against Codible Ventures LLP and others (the Defendants) for violating his personality rights by creating audio and visual content ‘inter alia mimicking or reproducing attributes of the Plaintiff’s personality’.

The case highlighted multiple instances of infringement, including AI platforms replicating Singh’s voice and image, a restaurant in Bengaluru hosting an event using his name and image without consent, and unknown entities registering domain names containing his name, one of which redirected to a third-party website. With over 30 defendants, these actions were found to infringe Singh’s moral and personality rights, as well as his publicity rights, under Section 38B of the Indian Copyright Act, 1957.

The Bombay High Court ruled that Singh’s personality attributes, including his name, voice, photograph and likeness, were protectable elements and that their unauthorised use constitutes an illegal act. Using AI tools to recreate his voice and likeness - apart from violating his exclusive right to commercially exploit his personality - could jeopardise his career if used for defamatory purposes and cause him severe economic harm.

Judge R.I. Chagla further added that “in the context of freedom of speech and expression, I agree that even though such freedom allows for critique and commentary, it does not grant the license to exploit a celebrity's persona for commercial gain.”

The Court granted an interim injunction prohibiting any use of Singh's identity across media, including digital and metaverse platforms, without explicit consent. This ruling is expected to be a significant precedent for protecting personality rights in India. Read the full judgment here.

 

German Court Rejects Application of Italian Cultural Heritage Code to Copyright Outside Italy

In November 2019, a legal dispute began between the Italian Ministry of Culture, the Galleria dell’Accademia di Venezia, and Ravensburger, a German puzzle manufacturer. The issue arose from Ravensburger's commercial use of Leonardo da Vinci’s Vitruvian Man without proper authorisation. The case raised questions about the applicability of Italian Cultural Heritage Law beyond Italy’s borders and the relationship between Italian law and European Union (EU) law.

After Italian courts ruled in favour of the Ministry of Culture, the case was brought before a German court in Stuttgart. The Italian side argued that Ravensburger’s use of the artwork violated Articles 107–109 of the Italian Cultural Heritage Code, which prohibit the commercial exploitation of significant cultural works without prior authorisation and licensing from the relevant authorities. The Ministry further claimed that the Code applied to sales outside Italy and that German courts did not have jurisdiction over the matter.

In its March 2024 ruling, the German court confirmed its jurisdiction and determined that enforcing the Italian Cultural Heritage Code outside Italy would breach the international legal principle of territoriality. The Court dismissed the application of the Italian Cultural Heritage Code to acts occurring within Germany, stating that “the opposite view violates the sovereignty of the individual states and must therefore be rejected”.

Regarding the copyright status of the Vitruvian Man under EU law, the Court noted the potential conflict between EU Directive 2006/116/EC, which states that copyright lasts for 70 years after the author's death, and the Italian Cultural Heritage Code. However, the Court chose not to make a final determination on whether the Italian Code contradicted the EU rule, acknowledging that “the question therefore remains open”. Read the German decision here and the Italian decision here.

 

Tesla is facing a lawsuit for AI-generated images with a 'Blade Runner 2049' aesthetic

Alcon Entertainment - an American film and television production company - has filed a complaint against Tesla for using AI-generated images with a ‘Blade Runner 2049’ aesthetic in a recent Robotaxi presentation. According to the complaint, Tesla fed images from the 2017 film into an AI generator to create visuals resembling its style after Alcon Entertainment refused permission for their use. The document filed as part of the initial complaint highlights one image of a silhouetted figure walking from a vehicle toward a futuristic city skyline, along with other visuals that appear similar to scenes from the film.

Under the Copyright Act of 1976 (US), a work infringes copyright only if it is identical to the original design or so substantially similar that it provides a prima facie case of copying. In other words, an image is considered a derivative work of copyrighted material only if it reproduces a protected element of the original work, rather than merely imitating its style. Accordingly, the fact that AI was used to create Tesla’s images may not affect the determination of copyright infringement, as the authorship of the image is not relevant to establishing infringement. However, the broader issue of whether using copyrighted images to train AI systems constitutes infringement remains the subject of ongoing legal debate in the US and other jurisdictions.

Read more here.

 

Korean art market expected to benefit from new regulations promoting the domestic market

The Korean government has introduced several new regulations, the first of their kind, specifically designed for the art market. These regulations are expected to benefit the local art market, with full implementation planned for 2027. The measures include an Art Service Industry Reporting System, which will require galleries, art fairs, auction houses and consultancies to file annual activity and transaction reports from 2026 to increase transparency. The regulations will also introduce an Artist’s Resale Right, entitling artists to a percentage of royalties when their work is resold, to ensure that creators benefit financially from their artwork’s appreciation over time. Read more here.

 

German collecting society GEMA is suing an AI developer for copyright infringement

GEMA, Germany’s collective rights management organisation, has filed the first lawsuit in Europe against a provider of generative AI systems for unlicensed use of copyrighted music. The case targets OpenAI, alleging that the company reproduced song lyrics by German authors without obtaining licences or compensating creators for the use of their works.

The lawsuit claims that OpenAI used GEMA’s repertoire, which includes works from around 95,000 members, to train its AI models without permission. GEMA aims to prove that ChatGPT’s training data unlawfully includes protected song lyrics. Read the full press release here.
