Episode 3
The Critical Role of Content Authenticity in DAM
Host Chris Lacinak is joined by Bertram Lyons, Co-Founder and CEO of Medex Forensics, to unravel the complexities of content authenticity. This is a topic that touches every organization, from Fortune 500 companies to museums, and from government to human rights groups. This episode offers listeners a deep look into this vital topic, equipped with Lyons' expert insights and his involvement with the latest developments in the field.
We discuss why content authenticity matters, why the problem has become an urgent one to address, what's so tricky about it, and what you can do about it today.
We touch on the Content Authenticity Initiative (CAI) and the C2PA schema, initiatives aimed at embedding provenance data into digital content to enable verifiable authenticity.
It's an essential listen that will help anyone who owns, manages, or creates digital assets navigate the complexities of content authenticity. We delve into the significance of these efforts for the Digital Asset Management (DAM) ecosystem and the broader implications for protecting and verifying digital assets across industries.
Guest Name(s):
Bertram Lyons
Guest Bio(s):
Bertram is the Co-Founder and CEO of the forensics technology company Medex Forensics, and he is a co-author of a patent behind one of the core technologies in the Medex platform. Before founding Medex, Bert served as a Managing Director at the digital asset management consulting firm AVP and as a Digital Archivist for the American Folklife Center at the Library of Congress. Bert's clients include the FBI, Smithsonian, Library of Congress, HBO, Paramount Pictures, University of Kentucky, Indiana University, and Facebook.
Bert frequently speaks and writes as an expert on the topic of content authenticity and is engaged in the Content Authenticity Initiative and the C2PA working group. He is also an Associate Member of the American Academy of Forensic Sciences (AAFS) and an active member of the Scientific Working Group on Digital Evidence (SWGDE). He has received certification from the Academy of Certified Archivists and is a graduate of the Archives Leadership Institute. He holds an MA in museum studies with a focus on American studies and archival theory from the University of Kansas.
Connect with Bert at https://www.linkedin.com/in/bertramlyons/
Topics discussed in the episode:
- Introduction to the topic of content authenticity
- Bertram Lyons introduction and background
- The increased urgency of the content authenticity challenge today
- Considering "doctored" content and content generated by AI and other synthetic means
- The limits of technological solutions in identifying fakes and forgeries
- Organizations and initiatives addressing content authenticity
- Content Authenticity Initiative and C2PA
- Why content authenticity matters for digital asset management
- Current state of content authenticity within DAM
- Content authenticity within the context of human rights and journalism
Resources mentioned in this episode:
- Best of My Love - The Emotions
- The DAM Right Soundtrack
- Scientific Working Group on Digital Evidence
- American Academy of Forensic Sciences
- Lomax Digital Archive
- American Folklife Center at the Library of Congress
- Content Authenticity Initiative
- Coalition for Content Provenance and Authenticity
- Medex Forensics
Episode transcript:
🆓 Download the DAM Strategy Canvas & other free resources from the best DAM consultants in the business at https://weareavp.com/free-resources
⭐ Please rate, like, follow, and subscribe on your podcast platform of choice. See all the places we are at: Listen to DAM Right.
🔗 Follow me on LinkedIn at https://linkedin.com/in/clacinak
License info:
Music from Uppbeat (free for Creators!):
https://uppbeat.io/t/hey-pluto/the-gentleman
License code: 3ANTPXRSIL9PFQJ3
Transcript
Hello, welcome to DAM Right: Winning at Digital Asset Management. I'm your host, Chris Lacinak, CEO of the digital asset management consulting firm AVP. In the summer of 2022, the FBI seized more than 25 paintings from the Orlando Museum of Art based on a complex, still unclear scheme to legitimize these supposedly lost-and-then-found paintings as the works of Basquiat. In 1903, the Protocols of the Elders of Zion was published, detailing a series of meetings exposing a supposed Jewish conspiracy to dominate the world. It was used in Nazi Germany, and is used by anti-Semites worldwide to this day, as a factual basis to promote and rationalize anti-Semitism. Of the many problematic things regarding this text, one of the biggest is that it was a complete work of fiction. In 2005, an investigation conducted by the UK National Archives identified a number of forged documents, interspersed with authentic documents, posing as papers created by members of the British government and armed services and tying them to leading Nazi figures. No one was convicted, but three books by the author Martin Allen cited these forged documents, and documentation shows that he had access to these specific documents. In 1844, an organized gang was convicted in London for creating forged wills and registering fictitious deaths of bank account holders whom the gang had identified as having dormant accounts, so that they could collect the remaining funds. As this sampling of incidents demonstrates, content authenticity is not a new problem. It is, however, a growing problem. The proliferation of tools for creating and altering digital content has amplified the authenticity dilemma to unprecedented levels. In parallel, we are seeing the rapid growth and deployment of tool sets for detecting fake and forged content. As this conversation highlights, the line between real and fabricated lies in the intent and context of its creation and presentation. This conundrum signals that technology alone cannot bear the weight of discerning truth from fiction; it can merely offer data points on a file's provenance and anomalies. As the hyperspeed game of cat and mouse continues into the foreseeable future, it's also clear from this conversation that addressing this challenge in any truly effective way requires an integrated and interoperable ecosystem consisting of both people and technology. The stakes are high, touching every industry and corner of society. The ability to assert and verify the authenticity of digital content is emerging as a cornerstone of digital asset management, as well as a social imperative. Amidst this complex landscape of authenticity, integrity, and technological chase, I am excited to welcome a vanguard in the field, Bertram Lyons, to our discussion. As the Co-Founder and CEO of Medex Forensics, a luminary in content authenticity, Bert's insights are extraordinarily valuable. His journey from Digital Archivist at the American Folklife Center at the Library of Congress to spearheading innovations at Medex Forensics underscores his deep engagement with the evolving challenges of digital veracity. Bert's involvement in the Content Authenticity Initiative and the C2PA Working Group, coupled with his active roles in the American Academy of Forensic Sciences and the Scientific Working Group on Digital Evidence, highlights his commitment to shaping a future where digital authenticity is not just pursued, but attained.
Join us as we explore the intricate world of content authenticity, guided by one of its esteemed experts.
Bertram Lyons, welcome to DAM Right. I'm so excited to have you here today, particularly at this moment in time, because I feel like the expertise and experience you bring will be a breath of fresh air, giving us a deeper dive into the nuance and details of a topic, content authenticity, that is most frequently experienced as headlines about bombastic AI sorts of things. I think you'll bring a lot of clarity to the conversation. So thank you so much for being willing to talk with us today. I appreciate it.
Bertram Lyons: Thanks, Chris.
Chris Lacinak: I'd like to start off by talking a little bit about your background. I think it's fair to say that you didn't come to forensics and content authenticity with the most typical background. I'd love to hear a bit about how you arrived here and how that journey informed your approach today.
Bertram Lyons: To give you a sense of where I am today: I work in the world of authenticating digital information, specifically video and images. As for how I got there, I spent more than 20 years working in the archives industry. That was really what I spent my time doing up until a few years ago. I started at various kinds of archives. One exciting place I worked for a number of years, when I first started out, was the Alan Lomax Archive. That was a really cool audiovisual archive. It had tons of formats, from the start of recording technology up until the time that particular individual, Alan Lomax, stopped recording, which spanned from the 1920s through the 1990s. So, really a lot of cool recording technology, and I did a lot of A-to-D, analog-to-digital, conversion at that time. That led me down a path of ultimately working on the digital side of archives and ending up at the Library of Congress in D.C., where my job was specifically Digital Archivist. My job there was to learn and understand how historical evidence existed in digital form, to document that, and to establish strategies and policies for keeping that digital information alive as long as possible: both the bits on one side and the information itself on the other, ensuring that we can reverse engineer information as needed as time goes on, so we don't lose the information in our historical collections. So it was many years with that, and then I jumped ship from LC and started working with you at AVP for a number of years. That was an exciting ride, where I was able to apply a lot of my experience to our customers, clients, and colleagues there. But ultimately, the thing that brought me into the digital evidence world where I work now was a relationship we developed with the FBI and their Forensic Audio, Video and Image Analysis Unit in Quantico. We were tasked to increase capabilities: to help that team, who were challenged with establishing the authenticity of evidence for court, increase their ability to do that, both manually, using their knowledge about digital file formats, and ultimately in an automated way. Because, unfortunately and fortunately, digital video, image, and audio are just everywhere. There's so much video, image, and audio data around that it becomes the core of almost every investigation that's happening. Any question about what happened in the past, we turn to multimedia.
Chris Lacinak: I think back to you sitting at the American Folklife Center at the Library of Congress. Did you ever have any inkling that one day you'd be working in the forensics field? Was that something you were interested in at the time, or was it a surprise to you that you ended up where you did?
Bertram Lyons: ...on my mind in that when I, in...

Chris Lacinak: Transitioning a bit now away from your personal experience. In preparing for this conversation, it dawned on me that content authenticity is not a new problem, right? There have been forgeries in archives and in museums, and in law enforcement and legal situations, for centuries. But it does seem very new in its characteristics. I wonder if you could talk a bit about what's happened in the past decade that makes this a much more urgent problem now, one that deserves the attention it's getting.
Bertram Lyons: You say the past decade; there are a few things I would put on the table. One would be the boom, which is more than a decade old, in social media: how fast I can put information out into the world and how quickly you will receive it, wherever you are. It's the ability for information to spread. And "information" could be media, like image or audio or video, or it could be what I'm saying in text; those are different things, right? So just to scope it for this conversation, let's think about the creative or documentary sharing of image, video, and audio. It's a little bit different when we talk about misinformation on the text side. But when we talk about content authenticity with media, it can go out so quickly, so easily, from so many people. That's a huge shift from years past, where we worried about the authenticity of a photograph in a museum. The reach and the immediacy are significantly different in today's world. And then I would add to that, and this is more of the last decade, the ease with which individuals can creatively manipulate or creatively generate new media that can be confused with documentary evidence. The content's the same whether I create a video of myself climbing a tree or whatever. That's content, and I could create a creative version of it that never actually happened. And that's for fun, and that's great. We love creativity, and we like to see creative imagery and video and audio. Or I could create something that's trying to be documentary: Bert climbed this tree and he fell out of it, and that really happened. I think the challenge is that the world of creating digital content is blending, such that you wouldn't be able to tell whether I was doing that from a creative perspective or from a documentary perspective. And I have the ability to share it and claim one or the other. So those who receive it, out in the social media world and the regular media world, have to make a decision: how do I interpret it?
Chris Lacinak: Yeah.
Bertram Lyons: But I think the core challenge we face on the authentication side is still one of intent by the individual who's creating and sharing the content. The tools have always been around to do anything you really want to digital content, whether it's a human doing it or asking a machine to do it. In either scenario, what's problematic is the intent of the person or group of people creating it, and how they're going to use it.
Chris Lacinak: What do you think people misunderstand most about the topic of content authenticity? Is there something you see repeatedly?
Bertram Lyons: From the way the media generally addresses it, I think one of the biggest misinterpretations is that synthetic media is inherently bad in some way; that we have to detect it because it's inherently bad. You get this narrative, and it's not true. It's a creation process, and it doesn't inherently have a bad or a good to it. It comes back to that question of intent. Synthetic media, or the generative AI that's creating it, is really just a new tool set for creating what you want to create. We've been looking at CGI movies for years, and how much of that is ever real? Very little of it, but it's beautiful and we love it. It's entertaining. And it comes back to intent. On the flip side, another big misunderstanding comes down to people's understanding of how files work and how they move through the ecosystems they're stuck in. Files don't live except within these computing ecosystems. They move around, they get re-encoded, and as they follow that lifecycle they get interacted with by all kinds of things: by encoders that change the resolution, for example, or encoders that just change the packaging. Those changes, which are invisible to the average person, are actually extremely detrimental to the ability to detect synthetic media, or anything else you want to detect about that content. As the content moves through, it's being normalized, it's being laundered, if you will, into something that's very basic. And as that laundering happens, that particular content and that particular packaging of the file become, in some ways, useless from a forensic perspective. I think the average person doesn't get that yet, though that information is available to them. If you want to detect whether something is synthetic and it's sitting in your Facebook feed, well, it's too late. Facebook had the chance on the way in, and they didn't do it, or they did do it. And now we're stuck with network analysis stuff. Who posted that? Now we're going back to the person. Who posted that? Where were they? What was their behavior pattern? Can we trust them? Versus having any ability to apply any trust analysis, unless it's a blatantly visual issue, to that particular file.
Chris Lacinak: Can you give us some insight into the major organizations or initiatives out there focused on the issue of content authenticity? What does the landscape look like?
Bertram Lyons: From the content authenticity perspective, a lot of it is being led by major technology companies who trade in content. That could be Adobe, who trades in content creation; it could be Google, who trades in content distribution and search; and everybody in between: Microsoft, Sony, organizations whose tools allow humans and computers to create content, or organizations who trade in the distribution of that content. There's an organization composed of many of these groups called the Content Authenticity Initiative. That organization is heavily led by Adobe, but it has a lot of other partners involved, and it has become an umbrella for, I'd say, an ecosystem-based perspective on content authenticity. It's focused on the ability to embed what they're calling content credentials: ultimately, to embed signals of some sort in digital content, whether that's text-based cryptographic signatures, watermarking, or other kinds of approaches, such that as content moves through the ecosystem I mentioned earlier, from creation on the computer, to upload to a particular website, to display on the web through a browser, those signals can be read, displayed, and evaluated. Can we map the lifecycle of a particular piece of content? Can we attach signals to it such that, as it works its way through, they survive, and ultimately a human can determine how much they trust that content?
Chris Lacinak: If I've got it right, I think the Content Authenticity Initiative folks are the ones creating what's commonly referred to as C2PA, the Coalition for Content Provenance and Authenticity. Is that right?
Bertram Lyons: That's right. Yeah, that's like the schema...
Chris Lacinak: Okay.
Bertram Lyons: ...the technical schema.
Chris Lacinak: And in my reading of that schema, and you said this, but I'll just reiterate and try to recap: it looks to primarily identify who created something. It really focuses on this concept of trusted entities. And it does offer, as you said, provenance data that will be automatically and/or systematically embedded into the files being created, starting at the creation process and going through the post-production and editing process and the publishing process. Is that a fair characterization? Is there anything salient that I missed about how you think about or describe that schema?
Bertram Lyons: I think that's fair. The only thing I would change in the way you just presented it is that C2PA is a schema and not software, so it will never embed anything or do any of the work for you. It allows you to create software that can do what you just said. C2PA itself is purely a set of instructions for how to do it, and if you want to implement it, you can. If Adobe wants to implement it, well, they actually already have, in Photoshop: if you create something and export it, you will have C2PA data in that file. So it's really a specification that can be picked up by anybody who makes software to read or write video, images, or audio. Actually, it's built to be pretty broad: they define ways to package the C2PA data sets into PDFs, into PNGs, into WAVs, generally trying to provide support across a variety of format types.
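[Editor's note: To make the "schema, not software" point concrete, here is a simplified, hypothetical sketch, in Python for illustration only, of the kinds of assertions a C2PA manifest carries. The real specification defines exact structures, serialization, and signing; the field values below are invented.]

# Hypothetical illustration of C2PA-style manifest content; not the real format.
example_manifest = {
    "claim_generator": "ExamplePhotoEditor/1.0",   # the tool that wrote the claim
    "assertions": [
        # Who made it (an authorship assertion)
        {"label": "stds.schema-org.CreativeWork",
         "data": {"author": [{"name": "Bert Lyons"}]}},
        # What was done to it (an actions assertion)
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created"},
                              {"action": "c2pa.edited"}]}},
    ],
    "signature": "<cryptographic signature binding the assertions to the content>",
}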
Chris Lacinak: And the provenance data that's there, or the specification for embedding provenance information, is optional, right? Someone doesn't have to do it. Is that true?
Bertram Lyons: Let me come at it a different way.
Chris Lacinak: Okay.
Bertram Lyons: It depends on what you use. If you use Adobe tools, it will not be optional for you, right? If you use a tool to do your editing that hasn't implemented C2PA, it will be optional; it won't even be available to you. That's why I talk about ecosystem. The tools you're using have to adopt and implement this kind of technology in order for the files you export to contain that kind of data. So it's optional in that you choose how you're going to create your content, and you have the choice to buy into that ecosystem, or actually to select yourself out of it.

Chris Lacinak: This reminds me of the early days of embedded metadata generally, before everyone had the ability to edit metadata in Word documents and PDF documents and audio files and video files and all that. It was a bit of a black box that would hold some evidence, and there were cases where folks claimed they did something on such-and-such a date, but the embedded metadata proved otherwise. Today that feels naive, because it's so readily accessible to everybody. So in the same way that there was a time and place when not everybody could access, view, write, or edit embedded metadata in files, this sounds similar: the tool set and the ecosystem, as you say, have to support those sorts of actions.

Bertram Lyons: Yeah, you'll have to support it. And, just so somebody listening doesn't get the wrong idea, the C2PA spec is very much stronger than the concept of embedded metadata, in that it's cryptographically signed. Up until C2PA existed, anybody could go into a file, change the metadata, and just re-save the file, and no one would ever know, potentially. The goal of C2PA is actually to make embedded metadata stronger. It generates this package called a manifest. Inside the file, there are going to be some assertions made by the tool sets that created the file, and maybe by the humans involved with those tool sets. They're going to make some assertions about its history, and then they're going to sign everything they said with a cryptographic signature, such that if anything changes, the signature will no longer be valid. So the goal is to lock down, inside the file, the information that was stated about the file when it was created, and to bind that to the hashing of the content itself. If I have a picture of me, all the pixels that go into that picture get hashed to create a single value, what we call a checksum. That checksum is then bound to the statements made about the file. Adobe Photoshop would make a statement about what was done to create it: it was created by Photoshop, these edits were done. That's an assertion. And then I might say Bert Lyons created it; that's the author, that's another assertion. Those assertions are then bound to the checksum of the image itself and locked in. And if that data sticks around in the file as it goes through its ecosystem, and someone picks it up at the end of the pathway, they can then check:
Bert says he created this on this date, using Photoshop. Photoshop said he did X, Y, and Z. The signature matches; nothing's been changed. Now I have a trust signal. And it's still going to be up to the human to say: do I trust that? Is C2PA strong? Is the cryptography and the trust framework strong enough that nobody really could have changed that?
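[Editor's note: A minimal sketch of the hash-and-sign binding Bert describes, assuming Python with the `cryptography` package. Real C2PA uses X.509 certificate chains, COSE signatures, and JUMBF packaging, so this is conceptual only; the file name and assertion values are invented.]

import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hash the content down to a single checksum value.
content = open("photo.jpg", "rb").read()          # hypothetical file
checksum = hashlib.sha256(content).hexdigest()

# Bind the assertions to that checksum, then sign the whole package.
claim = json.dumps({"author": "Bert Lyons",
                    "tool": "Photoshop (example)",
                    "content_hash": checksum}).encode()
key = Ed25519PrivateKey.generate()
signature = key.sign(claim)

# A later validator re-verifies: if any assertion byte (or, via the
# checksum, any content byte) changed, verification fails.
try:
    key.public_key().verify(signature, claim)
    print("assertions intact and bound to this content")
except InvalidSignature:
    print("manifest invalid: something was changed after signing")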
Chris Lacinak: So this C2PA spec brings this entity trust level, who created this thing, but it also has this robust, cryptographically signed provenance data that tells exactly what happened. And it sounds like that data is editable, deletable, and creatable, but within the ecosystem it lives in there are protection mechanisms that mitigate the risk of intentional alteration for malicious purposes.
Bertram Lyons: Yeah, I mean, think about it like this. It doesn't take away my ability to just go in and remove all the C2PA data from the file. I just did that with a file I created with Adobe. I needed to create a file of my colleague Brandon; I wanted to put a fun, fake, generative background behind him. I created it, put the fake background behind him, and exported it as a PNG. And out of curiosity, because I know how, I looked in there: oh look, here's the C2PA manifest for this particular file. I just removed it. Nothing stops me from doing that. I re-saved the file and moved on. Now, the way C2PA works, this file no longer has C2PA data. It can go on about its life like any other file, and if someone ever wanted to evaluate its authenticity, they would have to evaluate it without that data in it. They're going to look at the metadata, where it was posted, where they accessed it, what was said about it, all of that, the same way we do for everything we interact with today. If that C2PA data had stayed in the file (I was just making sure; I'm always testing C2PA: does the file still work if I remove this, et cetera), it likely would have been removed when I posted it to LinkedIn anyway, because the file gets reprocessed by LinkedIn. But if LinkedIn were C2PA aware, which maybe one day it will be, and I left the C2PA data in and submitted it, then LinkedIn would be able to say: oh look, I see C2PA data, let me validate it. It would validate it and give me a report: there's data in here, and I validated the checksum and the signature from the C2PA manifest. And now it could display that provenance data for me: it was created by Bert in Photoshop. Again, it all comes around to communicating back to the end user about the file. Now, it still doesn't stop me from making a malicious change. If, instead of removing the C2PA data, I went in and tried to change something, what would happen? Maybe I change who created it from Bert to Chris. If LinkedIn were C2PA aware, when that hit LinkedIn, it would say: this has a manifest in it, but it's not valid. It would alert on something being different in the C2PA manifest from when it was originally created. It doesn't keep me from doing it, but now I'm sending a signal to LinkedIn, where they can say there's something invalid about the manifest. That's the kind of behavioral pattern that happens. So again, it comes back to you. I went through that example just to show you that, no matter what we implement, the human has decisions to make on the creation side, on the sharing side, and on the interpretation side.
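[Editor's note: A sketch of the three outcomes Bert just walked through, as a hypothetical C2PA-aware platform might triage an upload. The function name and return strings are invented for illustration.]

def evaluate_upload(manifest, verify):
    """Toy triage of an incoming file on a C2PA-aware platform."""
    if manifest is None:
        # Manifest stripped or never present: fall back to other signals
        # (metadata, posting account, network/behavior analysis).
        return "no provenance data: evaluate by other means"
    if verify(manifest):
        # Signature checks out: display the provenance to the end user.
        return "valid manifest: show creator and edit history"
    # An assertion was altered after signing (e.g., author changed).
    return "invalid manifest: flag that something has been modified"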
Chris Lacinak: Right.
Bertram Lyons: Even at this most advanced technological state, and I think C2PA is probably the strongest effort that's been put forward so far, if I want to be a bad actor, I can get around it. I can just opt out of it. That's where it comes down. So the ecosystem is what's really important about that approach: the more systems that require it, and the less I'm able to opt out of it, the better. For this tool to work, it's really about the technological community buying in and locking it down, such that you can't share a file on Facebook if it doesn't have C2PA data in it. If LinkedIn said you can't share something here without C2PA data, then once I removed the data, I wouldn't be able to share it on LinkedIn.
Chris Lacinak: Right.
Bertram Lyons: That's what's missing so far.
Chris Lacinak: Thanks for listening to the DAM Right podcast. If you have ideas on topics you want to hear about, people you'd like to hear interviewed, or events you'd like to see covered, drop us a line at damright@weareavp.com and let us know. We would love your feedback. Speaking of feedback, please give us a rating on your platform of choice, and while you're at it, make sure to follow or subscribe so you don't miss an episode. If you're listening to the audio version of this, you can find the video version on YouTube at @DAMRightPodcast and on Aviary at damright.aviaryplatform.com. You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak. And finally, go and find some really amazing and free DAM resources from the best DAM consultants in the business at weareavp.com/free-resources. You'll find things like our DAM Strategy Canvas, DAM Health Scorecard, and the "Get Your DAM Budget" slide deck template. Each resource has a free accompanying guide to help you put it to use. So go and get them now.

Let's move on from C2PA. That covers some elements of content authenticity at the organizational level: provenance documentation, signatures, and cryptographic protections. You're the Co-Founder and CEO of a company that also does forensics work, as you mentioned: Medex Forensics. Could you tell us what Medex Forensics does? What does that technology do, and how does it fit into the ecosystem of tools that focus on content authenticity?
Bertram Lyons: The way we approach it, and the contribution we try to make to the forensics field, is from a file format forensics perspective. If we know how video file formats work, we can accept a video file, parse it, and extract all the data from it, all the different structures and internal sequencing, ultimately to describe the object as an object, as a piece of evidence, like you would if you were handling 3D evidence. Look at it from all the different angles, make sure we've evaluated its chemistry, really understand every single component that goes into making up this information object called a file. Once we do that, we can describe how it came to be in that state. How did it come to be as it is right now? If the question was, hey, is this thing an actual original from a camera, was it filmed on a camera and not edited, then we're going to evaluate it, and we're not going to say real or fake, true or false. We're going to say: based on the internal construction of this file, it is consistent with what we would expect from an iPhone 13 camera original file. That's the kind of response we give back, and that goes back into the interpretation. If the expectation was that this was an iPhone 13, we're going to give them a result that matches their expectation. If their expectation was that this came from a Samsung Galaxy, and we say it's consistent with an iPhone 13, that's going to change their interpretation; they're going to have to ask more questions. So that's what we do. We have built a methodology that can track and understand how encoders create video files, and we use that knowledge to automatically match the internal sequencing of a file against what we've seen in the past and introduce that data back. That's where we play in that world. I'll point out just a couple of things. We call that non-content authentication. You would also want to employ content-based authentication. Maybe critical viewing, just watching it; that's the standard approach. Or analytics on the pixels, with quantification: are there any cut-and-pastes? Are there any pixel values that jump in ways they shouldn't jump? There are a lot of algorithms that focus on the quantification side of the pixels in the image. People also do analysis based purely on audio: audio frequencies, looking for cuts and splices and things like that. So there are a lot of ways people approach content authenticity that, used together, can create a pretty strong approach. It takes a lot of knowledge to learn the different techniques, understand the pros and cons, and interpret the data, and that's why there's probably not a single tool out there right now: the domain knowledge required is quite large.
Just to tie in where we sit within the question of content credentials and C2PA: if we were analyzing your file, we would read the C2PA data in it and say, oh, there's a C2PA manifest in that file; we would validate it, and we would report back: there's a valid C2PA manifest, and here's what the manifest says. So we would also play in that ecosystem, on the side of analysis, not creation. We don't get involved with creating C2PA, but we recognize, read, and validate C2PA data in a file, for example. We're looking at all the signals, and that would be one signal we might evaluate in an authentication exam.
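[Editor's note: Medex's actual methods are proprietary, so as a toy stand-in for the structural, non-content analysis Bert describes, the sketch below lists the top-level "boxes" of an MP4/MOV file in order. The inventory and ordering of these internal structures is the kind of sequencing that could be compared against files of known provenance; the file name is hypothetical.]

import struct

def top_level_boxes(path):
    """Return the top-level MP4/MOV box types, in file order (toy example)."""
    boxes = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            boxes.append(box_type.decode("latin-1"))
            if size == 0:                  # box extends to end of file
                break
            if size == 1:                  # 64-bit size follows the header
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            else:
                f.seek(size - 8, 1)
    return boxes

# Different encoders write different box inventories and orderings, so a
# file's sequence can be compared against known camera or editor patterns.
print(top_level_boxes("clip.mp4"))         # hypothetical file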
Chris Lacinak: You said Medex won't tell you if something is real or fake, but just to bring this all together, tying into C2PA, let me say what I think my understanding is of how this might work, and you correct me where I get it wrong. It seems that C2PA may, for instance, say this thing was created on this camera; it was edited in this software, on this date, by this person, and so on. Medex can say what created it and whether it's been edited or not. So for instance, if the C2PA data said this was created in an Adobe product, but Medex reported that it was created in Sora, just to throw anything out there, it wouldn't tell you this is real or fake, but it would give you some data points that would help the human interpret and understand what they're looking at, and make some judgment calls about its veracity. Does that sound right?
Bertram Lyons: Yeah, that's right. And I'd say the human and/or the workflow algorithm that's taking data in and out. Think about it more like a moderation pipeline: C2PA says X, Medex says Y. They conflict? Flag it. They don't conflict, they match? Send it through. You can think about it that way too, from an automation perspective, but also from a human perspective.
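[Editor's note: A sketch of that automated cross-check, with invented signal values; a real moderation pipeline would weigh many more signals before deciding.]

def triage(c2pa_signal, structural_signal):
    """Toy cross-check of two independent provenance signals."""
    if c2pa_signal is None or structural_signal is None:
        return "insufficient signals: route to human review"
    if c2pa_signal == structural_signal:
        return "signals agree: send it through"
    return "signals conflict: flag for human review"

# Echoing the example above: provenance claims that disagree get flagged.
print(triage("Adobe Premiere", "Sora"))    # -> signals conflict: flag for human review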
Chris Lacinak: For the listeners of this podcast, who are largely DAM practitioners and people leveraging digital asset management in their organizations, I'd love to bring us back up to the level of why a Walt Disney, a Library of Congress, a National Geographic, or a Museum of Modern Art should care. Why should organizations that are practicing digital asset management with collections of digital files care about this? We delved into legal things and social media things, but why should an organization that isn't involved in a legal dispute, or some of the other scenarios we've talked about, pay attention? How does content authenticity play into the digital asset management landscape? Can you give us some insight into that?
Bertram Lyons: Yeah, that's a great question, and it's near and dear to my heart. We probably need hours to talk about all the reasons why, but let's tee up a couple. Think about collection development. On the collection development side, we want to know what we have, what we're collecting, what's coming in. And we do this today, as best we can, in that community with triage tools; I'll name one: Siegfried is a good example, built off the UK National Archives' PRONOM database. It focuses on identifying file formats. So, to date, when we're doing collection development, we want to know what file formats are coming in. And furthermore (I'm speaking of organizations like MoMA and the Library of Congress, which are collecting organizations; we'll get to National Geographic and Disney shortly), we need collection development tools that make sure we know what we have. It goes back to your earlier fakes question: we don't want to let something in that's different from what we think it is. And authentication techniques are not present in those organizations today. It's purely metadata analysis that's happening: extracting metadata, reviewing the metadata, reviewing the file format based on these quote-unquote signatures that the UK National Archives has produced with the community over the years, which are great. They're really good at quickly saying this is a Word doc, this is a PDF; they identify the type of file. They don't authenticate the content in any way. So that's one side of it. Quality control on big digitization projects is another great place to start incorporating this; see the sketch of signature-based identification after this answer. Of course we kind of do that with metadata techniques still. We're looking for metadata; we don't look at file structure, for example. We know what's in the file, but we don't always know what happened to the file. Authentication techniques are focused on that. So I think there are ways this could be added to the current pipelines in those communities. Then think about the content we're storing on the preservation side. We don't necessarily want to change the hash of files, right, when you're thinking about libraries and museums and archives. So there's probably not a play there to embed C2PA metadata, at least not in the original. There's probably a play to embed it in the derivatives created for access, et cetera; that's something to discuss. On the creation side, think about companies or organizations like Disney or National Geographic. Content credentials are an excellent mechanism, along with watermarking, which is all part of the same conversation, and this is moving beyond visual watermarking to non-perceptible watermarking, which is being paired with C2PA these days.
And the value there is about protecting your assets. Can you ensure protection as an asset goes through its lifecycle? While it's in your DAM, you want your DAM to be C2PA aware or watermark aware: you want your DAM to read these files and report that the C2PA manifest is here for this asset, it's valid, and here's the history. That's another way of securing your assets internally. And as they go out of the company, whether into advertisements, shared with patrons, or however they're used, it's another mechanism to ensure your copyright is asserted, to ensure you are protecting that asset, and to ensure anything that happens to it is directed back to you. On the creative production side of the house, these tool sets being developed around content authenticity are really built for that need: to give you some way to protect your assets as they're out in the world. That's why I come back to intent again. It gives you, who have an intent to do this, the ability to do it.
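[Editor's note: To illustrate the distinction Bert draws between identifying a format and authenticating content, here is a toy version of signature-based identification, the approach behind tools like Siegfried and the PRONOM database, using a few well-known "magic byte" patterns. It classifies the file type; it says nothing about how the content came to be.]

MAGIC = {
    b"%PDF":              "PDF document",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff":      "JPEG image",
}

def identify(path):
    """Identify a format from leading bytes (identification, not authentication)."""
    head = open(path, "rb").read(16)
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            return name
    return "unknown format"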
Chris Lacinak: What is the risk? Let's say these organizations, all of which are using digital asset management systems today, choose not to pay attention to content authenticity.
Bertram Lyons: It depends on what your organization collects and manages. But with the generative AI tools that are out there, if content that has meaning to you makes it out of your company's hands and you don't have any protections inside those files, it's very easy for someone to take it, move it into another scenario, change the interpretation of it, and put it back out into the world. This happens all the time. So one "why" is about protecting the reputation of your company. That's a big one. The other "why" isn't about the public; it's internal: increased efficiency and reduced mistakes. I don't know how many times we've seen companies or organizations that have misattributed which file is the original of an object and which is the access copy, and in some cases have lost the original and are left with only the access copy. The only way to tell the difference would be some kind of database record, if it exists. If it doesn't, you need someone with experience to do a one-to-one comparison. With content credentials, there would be no question at all about what was the original and what was a derivative of it. From a file management perspective, I think there are a lot of efficiencies to be gained there. And then there's potentially reducing labor. Think about National Geographic: they have photographers all over the world doing all kinds of documentary work. If that documentary work has content-credential-aware tools from the beginning (there are cameras out there, et cetera), or if the content credentials don't start at the camera but start at post-processing, going into a product that is C2PA aware, like Adobe's (I don't work for Adobe and I'm not trying to sell Adobe here, it's just an example), then that photographer can create all of that useful provenance data at that moment. And when it makes it to National Geographic, if their DAM is C2PA aware, imagine the reduction in typing and data entry at that point. We trust this data inherently because it was created in this cryptographic way. The DAM just ingests it, creates the records, and updates and supplements them. There's a lot of opportunity there, both for DAM users and for DAM providers.
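[Editor's note: Even before content credentials, the original-versus-access-copy question is what fixity checksums answer in archival practice; a minimal sketch, with a placeholder standing in for the checksum a DAM would have recorded at ingest and a hypothetical file name.]

import hashlib

def sha256_of(path):
    """Compute a fixity checksum; any re-encoding or edit changes the value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

recorded_at_ingest = "..."   # placeholder: checksum stored in the DAM record
is_original = sha256_of("candidate.tif") == recorded_at_ingest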
Chris Lacinak: Yeah. So to pull it up to maybe the most plain-language statements or questions that this answers: again, it goes back to who created this thing. A bad actor edits something, posts it, maybe under an identity that looks like an entity such as Walt Disney, and tries to say this thing came from Walt Disney. This suite of tools around content authenticity would let us know who actually created that thing, and allow us to identify that it was not, in fact, Walt Disney in that hypothetical. It also sounds like there's the ability to help verify whether something stated as real and authentic is in fact real and authentic: I've got this video, I've got an image of an artifact; is this digital object the real thing or not? And vice versa: someone claiming, and I think we'll see more and more of this, that something real is AI generated. "That's not real, that's AI generated, that doesn't exist." So there's also the ability to prove the veracity of something that's claimed to be non-authentic. Those are three things that what we've talked about today points at, and why this would be important for an organization. And you can imagine, across the list of organizations we named, a variety of scenarios in which answering those questions would be really critical.
Bertram Lyons: You give yourself the ability to protect yourself and to protect your assets.
Chris Lacinak: Right. So you have really drilled home today the importance of this ecosystem: a bunch of players working together, agreeing on, and building tool sets around it. Are you seeing DAM technology providers opt into that ecosystem yet? Are there digital asset management systems, and I know you don't know all of them, I'm not asking for a definitive yes or no across the board, but are you aware of any that are adopting C2PA, implementing Medex Forensics, or adding similar types of tools into their platforms?
Bertram Lyons: Not yet, Chris. I haven't seen a DAM company buy into this yet. To be honest, this is new; this is very much emerging technology, and I think a lot of people are waiting to see where it goes and what the adoption is. I will say that two years ago, when I started collaborating within the C2PA schema team, I felt there was very little chance of quick uptake. I thought: this is a mountain to climb. This is a huge mountain to climb, getting technology companies on board to create C2PA-aware technology, whether they make hardware, whether they're camera or phone companies, whether they're post-processing companies like Adobe, whether they're browsers like Chrome, whether they're search engines like Google, whether they're social media. I thought it was just a mountain. In two years' time, however, and I don't know if it was accelerated by all that's happened with AI so quickly and the fact that interest has elevated up to the government level (we have a presidential executive order on AI that mentions watermarking and basically mentions C2PA), there's been so much change that all of a sudden that mountain feels a lot smaller to climb. It can be done. Just in the past few months, massive organizations have jumped into the Content Authenticity Initiative, from Intel to NVIDIA: important players in that ecosystem are now coming on board. So I think there's a chance here, and I think we will see DAM system providers taking a much stronger look. I will say that in the digital evidence management community, which we call DEMS, there is definite interest in authentication. It's already happening in the DEMS world, and I think it will bleed over into the DAM world as well. For content coming into these systems, it's another signal the systems can automatically work with to populate and supplement what's happening behind the scenes. And we know that DAMs work closely with anything they can to automate their pipelines and make things more efficient for the end user.
Chris Lacinak: So I know you've done a lot of work beyond what we've talked about today: the law enforcement and legal components of this, and digital asset management within collecting institutions and corporations. But you've done some really fascinating work within journalism and within human rights. Could you talk a bit about that, and maybe tell us some of the use cases where Medex has been used in those contexts?
Bertram Lyons: The context of journalism and human rights organizations is really one of collecting and documenting evidence. On the human rights side, a lot of it is collecting evidence of something that's happened, and that evidence is typically going to be video or images. We have people documenting atrocities, or documenting any kind of rights issues that are happening, and wanting to get that documentation out, and also to have it trusted so it can be believed; so it can actually serve as evidence of something, whether that's evidence for popular opinion or evidence for a criminal court, from the UN, both and all. So that's the process that has to happen, and there are often challenging questions with that kind of evidence around documenting its authenticity. In some ways, things like C2PA have come out of that world. There's an effort that WITNESS, out of New York, worked on, and I know they had other partners in it, and I don't know the names of everybody, so I'll just say I know it's not just WITNESS, but I know they've collaborated for many years on efforts to create these camera-focused systems that allow an authentication signal to be stored and processed within the camera upon creation, and then securely shared out from that camera to another organization or location with all of that authentication data present. And what I mean by authentication data there is things like hashes and dates and times. The more challenging thing is to do it without putting the name of the person who created the content into the authentication, because it's a dangerous thing for some people to have their names associated with evidence of a human rights atrocity. So that's a really challenging scenario to design for, and human rights orgs have been really focused and put a lot of effort into trying to figure it out: so that you don't reduce people's ability to document what's happened by making it too technologically challenging or costly, and so that you don't add harm. You want the person who created this to be noted, but at the other end of the spectrum, you need someone else to trust it, and you can't say who did it, you can't say anything, right? So there's been a lot of excellent work, and we've been involved a lot on the side of helping to provide input into the authentication of video from these kinds of scenarios, to add weight to trust. Ultimately, it's all around trust: can we trust it? What signals do we have that allow us to trust it? And do they outcompete any signals that would make us distrust it? So that's been really exciting. That work is continually going on, and I know there are a lot of organizations involved, but we've partnered closely with WITNESS over the years, and they do excellent work. On the journalism side, it's a little different.
On the journalism side, you have journalists writing investigative reports. Their job, in a somewhat different way, is to receive or acquire documentation of world events or local events, and to quickly assess the veracity of that content so they can make the correct interpretation of it, and also decide the risk of actually using it as evidence in a piece, in an article. We work closely with a variety of organizations; The New York Times is a good example of a group we work closely with. It's not always easy. Even if you're receiving evidence from a particular human being in some location, you want to evaluate it with as many tools as you can. You want to watch it, look at its metadata, look at its authentication signals, and ultimately make a decision: are we going to put this in as the key piece of evidence in an article? It's never first person from the journalist's perspective; they're usually not the first person. They're taking it from someone who delivered it to them, whom they also can't prove is first person. They have to decide how first-person the content in this video or image or audio is. So I don't know if that answers your question, but you see a lot of need for the question of content authenticity in both of those worlds, and a lot of focus on it.
Chris Lacinak: Yeah. Well, maybe to pull it up to a hypothetical, or even hint at a real-world example: let's say a journalist gets a piece of video out of Ukraine or Russia, and they're reporting on that war, and they've gotten that video through Telegram or something like that. Their ability to make some calls about its veracity is really critically important. They could use Medex and other tools to say, for instance, if it looks like cell phone footage: yes, this was recorded on a cell phone; yes, this came through Telegram; no, it was not edited; no, it was not created with an AI generation tool or a deepfake piece of software, things like that. That would not tell them definitively whether they can or can't trust it, but it would give them several data points that would be useful for making a judgment call, together with other information, on whether they can trust it and use it in their journalism.
Bertram Lyons: That's right. Yeah, it's always the human at the end, and I've stressed this: as much as I like automated tools, in scenarios like that we really need a human to say, this is my interpretation of all of these data points I'm seeing. And that's a great example, and a real example; we actually dealt with it. Remember when that war originally broke out, there were challenges around a nuclear facility there. It was still under the control of Ukraine, and there were Ukrainian scientists in the facility sending out Telegram videos saying: we're here, there's bombing happening around this nuclear facility, this is extremely dangerous, please stop. The video was coming out through Telegram, but the only way to evaluate it was from a secondary encoded version of a file that initiated somewhere, was passed through Telegram to a Telegram channel, and was then extracted by news agencies, who wanted to say, as quickly as possible: is this real? We want to report on this; we want to amplify this information coming out of Ukraine. It's challenging. In the case of the files we were asked to evaluate there, we could say: yes, it was encoded by Telegram, and it has some signals left over that we were able to ascertain would only be there if this thing originated on a cell phone device, on a Samsung, for example. So in essence, maybe that's all the signal you have, and you have to make a judgment call at that point. Now, in the future, what if Telegram embedded C2PA data, and that was still there? Maybe that's a stronger signal at that point.
Chris Lacinak: Yeah. Or combined. It's another data point, right?
Bertram Lyons: Yeah, it's just another data point, right?
Chris Lacinak: Great. Well, Bert, I want to thank you so much for your time today. In closing, I'm going to ask you a totally different question, one that I'm going to ask all of our guests on the DAM Right Podcast, which will help shed a little light on the folks we're talking to and get us out of the weeds of the technology and details. And that question is: what's the last song you liked or added to your favorites playlist?
Bertram Lyons: The last song that I added to my liked songs was "Best of My Love" by The Emotions.
Chris Lacinak: That's great. Love it.
Bertram Lyons: Ha ha. You know, I've probably added that three or four times over the years; it's probably on there in different versions. It's a great, great track. I used to have a 45 of it. You know that track.
Chris Lacinak: Yep. It's a good one.
Bertram Lyons: I recommend you play it as the outro for today's DAM Right episode.
Chris Lacinak: If I had the licensing fees to pay, I would. All right. Well, Bert, thank you so much for all the great insight and contributions you made today. I really appreciate it, and it's been a pleasure having you on the podcast.
Bertram Lyons: Thanks for having me, Chris.
Chris Lacinak: Thanks for listening to the DAM Right podcast. If you have ideas on topics you want to hear about, people you'd like to hear interviewed, or events you'd like to see covered, drop us a line at damright@weareavp.com and let us know. We would love your feedback. Speaking of feedback, please give us a rating on your platform of choice, and while you're at it, make sure to follow or subscribe so you don't miss an episode. If you're listening to the audio version of this, you can find the video version on YouTube at @DAMRightPodcast and on Aviary at damright.aviaryplatform.com. You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak. And finally, go and find some really amazing and free DAM resources from the best DAM consultants in the business at weareavp.com/free-resources. You'll find things like our DAM Strategy Canvas, DAM Health Scorecard, and the "Get Your DAM Budget" slide deck template. Each resource has a free accompanying guide to help you put it to use. So go and get them now.