Artificial intelligence may be rewriting the rules of creativity, but Adobe’s legal compass still points toward one constant — trust. At a time when “AI-generated” and “authentic” are increasingly difficult to tell apart, Adobe has doubled down on its Content Credentials initiative, embedding digital provenance as a signature of transparency. For Louise Pentland, Chief Legal Officer and Executive Vice President for Legal and Government Relations at Adobe, this isn’t a defensive move — it’s the foundation of a more trustworthy creative economy.

In an interview with HT, Pentland speaks about Adobe’s long-running push for transparency through its Content Credentials standard, the need for common guardrails in AI governance, and why she believes the industry must avoid creating “haves and have-nots” in the next technology revolution. She discusses how Adobe’s evolving ecosystem, from Firefly to Project Moonlight, is designed to keep creators in control, even as AI reshapes creative workflows. From copyright harmonisation to the role of provenance standards, her perspective underscores a central belief — the future of AI must still have a human author at its core. Edited excerpts.
Adobe has championed the “Content Credentials” initiative for AI transparency. How far can such provenance tools go in practice, and how do you see that initiative developing in the years to come?
I think we’re on a journey, and we’ve obviously been on that journey perhaps longer than others who are now in this space. And so we’re very actively encouraging industry and governments to think about this as an issue, because the success of AI really does rely on trust and authenticity. We all know that deepfakes are very real in AI, and it goes quite fundamentally to the core of the matter: people feel confident with AI tools when they have the ability to recognise the provenance and history of a creation. That’s important. Some of our tech peers are on that journey with us. Others are not quite there yet, and I think it will be important to get everyone focussed on this.
As the Chief Legal Officer, you find yourself at this rare crossroads of ethics, regulation, and policy. What would you define as essential principles of this evolving social contract between humans and intelligent systems—one that genuinely protects authorship, while enabling innovation?
We think so. It’s always going to be about finding the right balance. You want governments to help define frameworks, but you don’t want things to be overregulated in a way that favours only a few or stifles innovation. History has shown this with every technological revolution. At the onset of the internet, people were worried about information on the internet and who would be liable for it; governments found solutions for that. I feel confident that will happen here. You saw the same in the music industry when it went from analogue to digital. Everyone was running around with their hair on fire, asking what was going to happen, and the industry found a solution for that. I feel a solution for AI will be found too.
I think we’re still in the process of working out what that looks like, so that we don’t create the haves and have-nots. You want the industry and technology to thrive, but you also want the creators to thrive. What is AI? AI is data; an LLM is just a lot of data and how that data is leveraged. If there’s no new data, then that AI technology isn’t very helpful. So you’ve got to have people creating new content, which means you have to protect the ecosystem of creators. And that feeds into this entire environment, where we have to make sure every constituent has the right protections for the ecosystem to stay healthy.
With AI creations now defining social media feeds and indeed content creation in general, do you believe the definition of creativity itself has changed? When machines can “create,” do we need to rethink the very definition of ownership — and who counts as a creator?
Our view is that AI is a tool in a toolbox, just like an artist has a paint kit. AI can be leveraged to create, but it will not be the creation. The creation is always going to be human-driven. We saw this in our keynote as well: in every one of those demos, the creation was original, and it was the person running the demo who created it.

It was not AI doing it. AI is an enhancement, a way to do things quicker or more efficiently, or to stimulate new ideas or thinking. Yet ultimately, the creator stays in control of that creation. We fundamentally believe that is the model, and the complement, that AI can be. We have aligned our entire company around that model, and I think it is going to allow so many amazing creations without losing that human connection.
As Adobe builds AI tools and widens the scope of Firefly as well as Project Moonlight, how important are guardrails for Adobe, and how do you balance experimentation with protection?
IP protection is obviously fundamental for what Adobe creates. What we heard from customers was about the tools they use; each model is optimised for something different, so if you’re a creator, you might want this model or that model. What we heard from our creative community was that they wanted more choice. So what we enable is a seamless experience that allows them to make those very intentional choices. You can stay in the Firefly environment, or if you want to do more with another tool, you can bring it into the environment.
Giving the consumer a choice is always going to be the best option. And then making sure that they’re well informed on what those choices are is also part of that decision-making. But ultimately, we’re very focussed on the industry standards around Content Credentials, and making sure that the industry as a whole is leaning into this. We’re on a journey, certainly, and tech companies are very active there. I think social media companies still have a way to go. And ultimately, we hope that they get there.
Partner AI models within Firefly, of which there are now more than ever, are easily available to users. How do you see this evolving, not just in terms of the capabilities of those models but also in adding new models to Adobe’s platforms?
As Shantanu Narayen also mentioned, not all models are going to survive. As always with new technology, you’re going to have ups and downs. We want to be the ecosystem provider, so we will work with the models that our creators want and which we know will bring value to our customers. We are opening ourselves up to whatever models are out there.

But we know that’s going to change. I think that’s the opportunity for a lot of these different model providers, and they want to work with us because they see that our software layer gives creators much more precision and opportunity to leverage for different things. The models will change over time; we all know that to be the case, so we are making sure we work and partner with all of those providers to provide the best service to our creators. That is really the goal.
Regulators globally seem to have different approaches towards AI copyright. Do you believe we can ever achieve a globally consistent standard for creator rights in the AI era, or will regional fragmentation become the norm?
We probably would see world peace before we would see full harmony on global regulation! I can’t even think of a time when we’ve had global harmonisation on policy. I think this is going to be very country-specific. We know that governments talk to each other, but they also have their country constituents to think of. And I think that’s not new in technology. We’ve seen that with the cloud through to data centres. AI is not going to be that different in that sense. We’re just going to have to hope for harmony on some level. But we’ll have to work with each country and deal with their requirements.