In 2023, AI technology took the photography world by storm. Deepfakes fooled us all, and realistic AI-generated images were entered into photography competitions simply to test whether those competitions were prepared for AI imagery. As Boris Eldagsen discovered, they were not.
Is there any way to stop artificial intelligence from changing the world of photography and filmmaking? Probably not. However, photographers are taking a stand (and rightly so) through various lawsuits against AI companies, in an attempt to stop their originality from being stolen. But is there anything else that can be done? This is where photography brands are attempting to take matters into their own hands to protect their users’ images.
Content Authenticity
It was in 2022 that Leica took steps to protect the authenticity of its users’ images by introducing in-camera digital signatures. Partnering with the Content Authenticity Initiative (CAI), Leica built content credential software into its M11-P that documents who took the image, standard metadata (the time, date and location it was taken), and whether any post-production edits have been made. The image is then signed using a C2PA-compliant algorithm. You can watch Leica ambassador Nick Rains explain exactly what this technology is here.
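To illustrate the general idea (and not Leica’s actual implementation), here is a minimal sketch in Python of how capture details can be bound to the image data and signed, then checked later for tampering. Real C2PA content credentials use asymmetric, certificate-backed signatures rather than a shared secret; the HMAC scheme, key and field names below are simplifications chosen purely to keep the sketch self-contained.

```python
import hashlib
import hmac
import json

# Hypothetical key for illustration only. A real camera would hold a
# private key and verifiers would check against a public certificate.
SIGNING_KEY = b"camera-signing-key"

def create_manifest(image_bytes: bytes, author: str, captured_at: str) -> dict:
    """Build a simplified content-credential manifest for an image."""
    manifest = {
        "author": author,
        "captured_at": captured_at,
        # Hashing the pixel data binds the manifest to this exact image.
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the manifest is untampered AND matches the image."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was altered after signing
    # Re-hash the image: any pixel edit after capture breaks the match.
    return claims["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
```

The key property this sketch demonstrates is that any change to either the image bytes or the recorded metadata after signing causes verification to fail, which is what lets a viewer trust the recorded origin and history of a photograph.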
Leica says: “Cameras equipped with CAI-compliant technology create transparency, building trust. The technology makes it possible for the first time to check the origin and history of a photograph – and thus prove authenticity.”
At the M11-P’s price point, however, this isn’t something that the average consumer will have access to. But are changes on the horizon?
Following suit, Nikon, Sony and Canon are seemingly developing their own authentication technology to combat AI-generated imagery and image theft, and to help photographers assert copyright over their photos. Sony, for example, has developed in-camera photo forgery detection technology for corporate business users, and reports suggest that Nikon and Canon may integrate similar technology into upcoming releases. While aimed mainly at professionals such as photojournalists, for whom authenticity is paramount, this is a step in the right direction.
Is it enough?
AI is a pesky one, though, and these developments in camera technology currently stretch only as far as a few top-of-the-range cameras. What about the individuals using all the other cameras, or even smartphones? The CAI’s verification portal is designed to identify nefarious image tampering or AI image generation by providing information to authenticate an image, but the technology can’t verify everything (yet).
Meta, predicting that the use of AI-created images will increase with a number of elections coming up, is attempting to label all AI-generated images on its platforms. This is why content authentication is predominantly aimed at professional photojournalists, whose images need to be reliable in times of political turmoil.
But this will not extend to the average user, or even to professional artists. Their work is currently up for grabs, from an AI’s point of view.
The AI debate
Wex writer Jon Stapley recently spoke to internationally acclaimed photographer Tim Flach, whose work has been at the forefront of this discussion about AI and copyright. In that interview, Tim discussed his experience of being ‘scraped’ by generative AI, as well as what the future might hold for these technologies. In this context, could content authentication, such as a C2PA-compliant digital signature, have played any part in the proceedings? Presumably, an AI could be trained by a responsible human (or organisation) to recognise digital copyright when given a specific prompt, or at least to put two and two together. A naive thought, perhaps. After all, the machine isn’t at fault, is it? The prompt is given to the AI by a human.
It is safe to say that a legal framework needs to be established; how and when we get to that point is still very much up for debate. With the leaps AI made in 2023, 2024 will surely see even steeper growth, and with it, significant developments in this area.
Will digital signatures help save authenticity? Can generative AI, or the humans that built the models, be tempered? We’ll have to wait and see.
FAQs
Can images be copyrighted?
Yes, images can be copyrighted. In short, like many other countries, the UK recognises copyright protection for original works of authorship, including photographs, illustrations, paintings, and other visual arts. Copyright law in the UK grants the creator of an image or any other original work certain exclusive rights, such as the right to reproduce the work, distribute copies, display or perform the work, and create derivative works based on it.
How do I know if a picture is copyrighted?
Determining whether a picture is copyrighted isn’t always as simple as seeing the copyright symbol (©). You may need to investigate further. The absence of a copyright notice doesn’t necessarily mean the image is not copyrighted. You should consider the image source, check for licensing information, perform a reverse image search or, if you can, contact the creator of the image.
Can I use copyrighted images?
Using copyrighted images without permission from the copyright holder generally constitutes copyright infringement, which is illegal in most countries. However, there are some exceptions and limitations to copyright law that may allow you to use copyrighted images under certain circumstances, such as fair dealing, or by using royalty-free image websites like Unsplash.
Can you legally use AI-generated images?
The legalities of using AI-generated images depend on factors such as copyright ownership, terms of use, licensing agreements, and ethical considerations. Because the technology is still relatively new, this is a murky area and it is best to try to understand and comply with applicable laws and terms of use when using AI-generated images to avoid potential legal issues.
Who owns the copyright of AI-generated images?
This is a complex matter. It depends on various factors such as jurisdiction and human intervention. However, it is advisable to consult with legal experts familiar with copyright law and AI technology to determine copyright ownership of AI-generated images in specific situations.
Are AI-generated images art theft?
This is at the heart of the current debate in the creative industry. Whether AI-generated images constitute art theft depends on a range of legal, ethical and cultural considerations. If you are an original artist taking photos, filming or drawing, then you may lean towards the it-is-theft argument. The answer is likely to become more complex as time goes on, and for now there isn’t a clear one; much depends on who you are and what your intentions are.
About the Author
Leo White has been a member of the Wex Photo Video team since 2018, working in a variety of roles ranging from the contact centre to the product setup team. With both a photography BA and MA, Leo has a wealth of knowledge he's ready to share.