Transcript

The AI debate is fracturing the creator community

Daniel Kwan (of The Daniels) and I talk about the Creators Coalition on AI

JOE: Ready? Are we recording? All right. Hey, it’s Joe. This is Dan Kwan, filmmaker extraordinaire, Dan of the Daniels who made Everything Everywhere All at Once, amongst other films, and Swiss Army Man. We haven’t talked about Swiss Army Man.

DANIEL: Oh my god, you’re bringing up Swiss Army Man? You know this year is the 10-year anniversary of its release.

JOE: No, it’s not. Really?

DANIEL: Sundance, next week. Yeah, 10 years. It’s kind of crazy.

JOE: I’m going to Sundance. It’s going to be the 20th anniversary of Mysterious Skin. I got a 10-year-old, an 8-year-old, a 3-year-old. All right, so it was probably like six months ago, seven, eight months ago, you started talking to me about this idea for what has become the Creators Coalition on AI. And we’re having this conversation right now because we made the announcement. What was it, late December?

DANIEL: Late December, yeah.

JOE: Yeah, we were gonna launch a little later.

DANIEL: Yeah.

JOE: And now we’re flying the plane as we build it—

DANIEL: I think that’s the only way to do anything these days.

JOE: It merited a response. So we announced just towards the end of last year, and we wanted to have a follow-up. And, you know, there’s been like a lot of really lovely positive encouragement and support for the Coalition. A lot of people have signed their names, and then there’s been a lot of questions. So we thought we would just kind of record a conversation where we answer a bunch of these questions.

DANIEL: Yeah, it’s really important to us. I think transparency is one of the hardest things to be fighting for right now when it comes to conversations around the tech industry. And it only feels right that if we’re gonna be fighting for transparency, we should also be modeling it. And so we at the CCAI believe we should be having a lot of these conversations out in the open. Right now the problem in Hollywood is that a lot of conversations are happening in secret and behind closed doors, and everyone is kind of having these narrow conversations in isolation. And I believe that we’re not going to be able to move fast enough if that’s the way we do this, because obviously this is very scary. There’s a lot of potential risks. There’s a lot at stake. And I wanted to create CCAI with a bunch of other like-minded people, specifically to create a space where we could have these hard conversations between a spectrum of different voices and a spectrum of different experiences. Because if we’re only having these conversations in two places, and those two places are either behind closed doors or online—you know, where, even with the best intentions to have a nuanced conversation online, it’s going to get swirled up by the algorithm and then chewed up and polarized. And I knew that this conversation was too important to risk that kind of outcome.

JOE: So yeah, and frankly, that kind of polarization only plays to the advantage of the predators. Because I think there’s a divide-and-conquer strategy. They want us to not be talking—

DANIEL: Or to be fighting the whole time.

JOE: And that fighting is amongst each other.

DANIEL: Yeah, fighting each other instead of fighting the people who actually have the power and the control over how this technology is being deployed. Which, most of—you know, if anyone’s been following my journey into the AI world—I believe that the way that this technology is being deployed is completely wrong, and we really need to be pushing back on it. And I believe there is a better way. We just need to work together to make that happen.

JOE: Yeah, and I agree with you. And at the same time, I’d like to maintain some kind of optimism, because I think the technology itself has the potential to be great. But the way it’s being turned into businesses nowadays is leading us down some potentially really dark paths. And that’s why I think now is the time. If we can have these conversations, we can hopefully do some course correction and be like, let’s take this technology that could be incredible but is currently being leveraged in an ultimately damaging and power-concentrating way. Yes. But if we can course-correct, maybe it could be something that genuinely is good for everybody.

DANIEL: Yeah. Yeah. Yeah. I would love to talk more about that kind of dichotomy. Like, you know, we spend so much time talking about the risks, it’s really hard to talk about the benefits. And I think that’s for good reason, because—this is a bit of a crude analogy, but I’ve been using it recently and it’s been helpful in conversation. When you’re in a relationship with someone and the other person suddenly wants to invite a third in, like a throuple, you know, a threesome, right?

JOE: Polyamory.

DANIEL: Polyamory, or just a one-night thing, who knows. If your core relationship is not one that has foundational trust and ideas around consent, ideas around a shared understanding of what you guys are stepping into, bringing in a third is incredibly—it’s chaotic and dangerous. And no one—

JOE: I was so tempted to ask if you’re speaking from experience, but I’ll skip that.

DANIEL: But you don’t want to talk about the fun stuff, right? Unless you feel—unless you have some trust. And our industry is trying to talk about the fun stuff. We’re trying to talk about the positive, we’re trying to talk about all the benefits this could have. But we haven’t even established basic rules of trust and consent. And until we have that safe conversation, I don’t blame people for their knee-jerk reaction to be like, hell no, I don’t even want to hear about the positives. So I just want to acknowledge that I do believe that there are positives, but there’s so much work that we collectively need to do together to make each other feel safe before I feel like, all right, let’s do that. Yeah. This is me, personally.

JOE: Yeah, which is why most of the time, when I’m raising my hand or making a video about AI, it is more about the concerns. But it is important not to get completely pessimistic. Ultimately, if you want to head in a positive direction, you’ve got to have your eye on that too. But do you want to talk a bit about what we’ve done so far, what the announcement said, just for those that maybe hadn’t seen the announcement a month ago?

DANIEL: Yeah, amazing. So a group of us have spent the last six or seven months coming together—filmmakers from all different areas of the industry. We have actors, producers, writers, but we also have VFX artists, we have voice actors, we have people from tech-adjacent spaces who understand the technology but are very critical of how big tech is implementing it.

JOE: I’d also add there’s quite a few people who have shown a lot of enthusiasm who are not in the Hollywood film and TV world, because I think that this is just as important, if not more so, on YouTube, for example, or in the podcasting space, etc., etc.

DANIEL: Exactly. Because of how decentralized this problem is and how widely distributed this technology is, to only have a conversation about Hollywood would be foolish. And so we’ve also been intent on inviting in and having conversations with online creators and things like that. But the initial impulse was to bring everyone together on the same page because my—one of my fears is we continue on the default path.

JOE: Yeah.

DANIEL: If we don’t coordinate. The path of least resistance looks something like this, because we’ve seen this happen and play out in other industries. We’ve seen it happen in our own industry. It’s one in which the tech industry comes out with a new shiny toy, and there are a lot of really exciting, interesting things about it. They deploy it onto an industry, and it disrupts things, right? It’s the “move fast, break things” model. And at first, the relationship’s really wonderful and exciting. I remember when Uber first came out, I was like, oh my gosh, so affordable, so convenient. This is amazing. But what ends up happening is the disruption of the model really breaks some fundamental things, like protections, the ethical concerns around any of these technologies, in a way that allows the tech industry to consolidate a lot of power and a lot of control. They hold all the cards. Once they capture a large enough market share, no one else can compete. And then once that happens, they dictate the rules. They set the terms.

JOE: And Uber is now high-priced, whereas when they first started, they had these low prices to get everybody hooked.

DANIEL: Exactly. It’s high-priced for the user, low wages for the driver. And on top of that, the drivers—if you look at the taxi industry, that used to be an incredibly strong, powerful job. The labor protections around that were really powerful, and the requirements to get into that field were really high. You had really incredible drivers who knew how to do their job. Now—and this is no offense to any Uber drivers—but some of you are terrible at driving. Some of you, like, it’s stop-and-start, it’s stop-and-start. It’s like, the number of times I feel like—

JOE: And the drivers are really poorly treated—

DANIEL: Exactly.

JOE: I was in an Uber the other day, driving from the airport, and it cost, I think, something like ninety dollars to get to my house from the airport. And the Uber driver said to me, “What is it charging you?” I said, “It’s $90.” He said, “You know, I’m getting $30.” That’s crazy. Doing that whole drive, and he’s only getting a third of the money.

DANIEL: So they’re being mistreated. We’re getting a worse service, and we’re paying more, and a lot of it’s getting siphoned up to the tech companies. And this has happened with Spotify and musicians. You can see what happened with Airbnb and housing. Even when you look at streaming—obviously it’s a very complex thing that happened to us—but when we chased after what the tech companies were doing within our industry, we accidentally created a streaming bubble that devalued our product, our stories, in a way that changed the relationship our audiences had with the theatrical experience, which suddenly made some of our business model no longer make sense. And so now we’re struggling after that pop, in a way where fewer productions are happening in the U.S. Budgets have gone up to a place where it’s really hard now for people to make movies and for audiences to come out and actually support them. And so—this is a long-winded way to say—the default path is one in which the tech industry sets the terms for our industry, and suddenly the creators are no longer at the table and no longer have any power or any agency within our careers and within our industry.

JOE: But the truth is that these tech companies need the creativity of humans. Their products don’t work. Their generative AI services don’t generate anything at all without all the content and data that they have taken from human creators.

DANIEL: Exactly. Exactly. And what’s really interesting about this situation is we’re in this rare moment where I believe, no matter who you are within our industry, there are aligned goals and aligned values that we could actually gather around and fight for. Our industry is normally divided between labor and the studios—and that’s a very important part of our industry, the way those two parties negotiate—but we’re in this unique situation where the unions negotiating with the studios will only solve some of the really important problems. The bigger problem is how the tech industry is releasing this technology, not just into our industry, but to the rest of the world.

JOE: Yeah. That’s why I like using the word “creators” for this Creators Coalition, because whether you’re a creator in the sense that you’re an individual artist, or you are a union like the Screen Actors Guild or the Writers Guild of America, or you’re even a studio—or perhaps, and I think honestly maybe even most importantly, you’re a creator that’s not affiliated with the traditional Hollywood film and TV industry. You’re making your—you’re doing your thing in the larger digital creator economy right now. We’re all facing a common problem.

DANIEL: Exactly. And so with the Creators Coalition, my hope—and my aim, and all of our founding members’ aim—is to find a way to bring together the like-minded people: the filmmakers and the crew members and the agencies and the executives and the people working at the studios who are really waking up and realizing we actually collectively need to ensure that we are the ones setting the terms for our industry, and not allowing the tech industry to do that for us. That being said, we’re also going to extend even further. Our cohort and our community will also have to extend out to the tech people who want to be doing this right, because that’s something that’s really important to acknowledge: the tech industry is not a monolith. There are obviously a lot of terrible incentives and a lot of really big corporations doing things that I believe are damaging to our world. But there are so many people who work within tech—many people who I know personally—who desperately want the work that they do to matter and to be done right. And they are also looking at what’s happening with the tech industry in horror. And so our strange, nuanced, coordinated effort to reach out to all these different kinds of parties across many different dividing lines, to me, is where we will find strength. And the important thing I always try to tell people is, right now we have to realize that the dividing line isn’t between the labor and the studios. It’s not between the film industry and the tech industry. The dividing line is between those who want to move fast and break things and those who want to slow down and actually get this right. And so that’s sort of, you know, to address one of the—

JOE: Can I interject on that? Slow down and get things right? This is what we do when we talk every week. Like, I don’t necessarily agree with everything you say, you know. I usually bristle at the idea of slowing down, because to me there’s this false dichotomy. You get these Silicon Valley VCs who will call themselves accelerationists and say we have to go fast, we have to go fast, and all these anti-tech people just want us to slow down. I think we can go fast. We should go fast. The question is, which way are we going? Can we also steer while we go fast? Can we steer towards good solutions? We have to be building, not stop building. Building good, pro-human, pro-society solutions requires going fast. I mean, you could say slow down, let’s take stock, let’s be careful, let’s be thoughtful in how we do this. But I don’t think the right answer is, like, let’s just stop.

DANIEL: Yeah. Yeah. Totally.

JOE: I don’t know how that actually works in the room—

DANIEL: Yeah. And we can—I think we can break down all the reasons why right now a full-on stop isn’t viable and actually could hurt a lot of things.

JOE: That’s one of the questions I want to get to. Maybe before we—because that was one of the kind of frequently asked questions I want to answer in this conversation. Before we do, there’s two things I feel like we should cover before we get into the questions. One is, what are we gonna do about it? We, meaning not just you and me, but this coalition. Where, you know, it started with a core group, grew into a bigger group. Now there’s many thousands of people who have signed their name to it. What can actually be done?

DANIEL: Yeah, that’s a great question. I just want to circle back and put an underline on one thing that you said, because I think it’s important. The fact that you and I are disagreeing right now—yeah—is so important, because that is why the Creators Coalition is important. We want to create a hub where we can have these really hard conversations with nuance. And so when people look at our roster and they see a kind of bizarre spectrum of bedfellows, that is the core of what we’re trying to do. Because I believe that the disagreement is where you will find—it’s a lot more difficult, obviously, but it’s going to be the way that we actually—you know, if there is one way to get this right, and I don’t think there is one way, but if we are going to ever be able to get this right, it’s going to be through a lot of hard conversations and disagreements on our way to finding alignment. So, I’ll just say that.

JOE: And some compromises.

DANIEL: So many compromises.

JOE: Not that this is democratic exactly, but it’s not—you know, even though I would say that you sort of started this, you’ve always been really specific in saying, yeah, but I’m not the director, the dictator, the CEO. Let’s do this in an open and collaborative way. And even though I joined early, I don’t want to see it as, well, that means that my opinion matters more. The way that you’ve always talked about it, and I really agree with doing this, is sort of, yeah, this kind of open-source, open, collaborative, pluralistic, cooperative way.

DANIEL: Exactly. So it involves disagreements.

JOE: So it involves disagreements. And compromises. Okay, so to that question, the question of what can we actually do? What do we think this coalition is gonna do next?

DANIEL: Yeah. So the strategy is constantly evolving, just because it has to be. It’s constantly evolving. The more conversations we have with different parties and different people and different perspectives—we like to listen. It’s very important that we’re listening, and it causes things to evolve. But right now, the general framework is, first, step one: we have to unite and use our collective power. And that goes beyond just the unions. We have to do unions, agencies, studios, content creators, YouTubers, beyond Hollywood.

JOE: Everybody who makes things.

DANIEL: Because even though we are very different and we have very different jobs and very different wants and needs, we have a lot of common overlap around what this technology is going to do to us now and also in the future. And so number one: unite. Number two: we’re going to have to build. And what I mean by that is, this technology is fundamentally incompatible with our current systems and institutions. Like, all the laws and the rules around how our business model works—this technology has the potential to break a lot of it.

JOE: And all those rules were put in place before this technology existed, and this technology is so new that those rules don’t necessarily make sense.

DANIEL: Exactly. And so we have to really rebuild and transform a lot of these systems in a way that can actually hold and bind this technology so that we can use it responsibly. Meaning, how do we mitigate the worst risks? There’s a long list of things that we need to do for that to happen. But also, how do we get the benefits out of it? And I do believe there are benefits worth discussing. Even if you’re someone who absolutely hates AI, it’s worth noting that we’ve been using versions of AI since Lord of the Rings, or even before, with crowd simulation and stuff like that. So, building—

JOE: Yeah. Not just saying, hey, here’s a bunch of problems, but let’s try to build some solutions.

DANIEL: Build some solutions together. Because—and on our terms, not on the tech industry’s terms. On our terms. How do we build them on our terms? Okay. So: unite. Build. Third is, once we’ve built a system that shows us what is the right way to do it and what’s the wrong way to do it, we need to be fighting back against those who aren’t going to do it the right way. And I believe that there is a version of this where we can use our collective power to fight back through litigation and legislation, especially on a state level. If you think about California—Silicon Valley and Hollywood—we could be really using that lever.

JOE: And then there need to be laws.

DANIEL: There have to be laws. There are no laws.

JOE: There are no laws! There are laws for every industry. There are more laws, they say, about how you make or sell a sandwich than about this world-changing technology—

DANIEL: Exactly. And I believe we are going to be able to do that better if we’re doing it together. And the last thing is, we need public pressure. We need to—because I believe that this technology, specifically Gen AI, for video, for audio, for text, it’s already shown how dangerous it is. The fact that the same technology that they’re pitching to us as a tool for creativity and unlocking our imaginations is also being used by bad actors to completely alter reality through deepfakes. Right? Like, this affects everyone. And I believe that if the government’s not going to regulate Gen AI in that way, someone has to. And I believe Hollywood—we still have some leverage, and we still have some influence, and we would love to be the tip of the spear to lead the rest of society against the ways in which the tech companies have decided to deploy this. So public pressure, bringing people outside—anyone outside of our industry who is deeply concerned—to come join us and push back together.

JOE: So that’s all part of fight. We had number one, unite; number two, build; three, fight—fight through litigation, legislation, and public pressure.

DANIEL: Exactly. And number four is, this is not just happening to Hollywood, and we believe that we need to be connecting with other industries, and we need to be in some ways learning from what the teachers are doing in their industry, learning from what the doctors are doing. And on the flip side, Hollywood can be a model to other industries as well, to show them how to unite, build, and fight.

JOE: I remember the first time you talked about this, you bringing up that point, and that was really what made me say, oh, you know what? I really do want to team up and get involved in this. Because as much as I deeply care about movies and art and creativity—and I do; it’s not only how I’ve earned my living my whole life, it’s something I’m deeply passionate about, and the life of an artist is something that means more to me than maybe is even rational—there is something much bigger going on here than film and TV. And if we’re gonna be spending all this time and effort trying to work to build new, better systems and fight back against the path of least resistance, to me, it’s got to be about more than just movies and television shows. It’s about the way the whole world is about to unfold over the next X number of years.

DANIEL: One hundred percent. Because the tech industry has so much consolidated money and power and influence at this moment, I believe that if the teachers are pushing back at the same time that the truck drivers and the filmmakers and the doctors—if all of us are collectively uniting, building, and fighting together at the same time, it’s going to be really—it’s going to be really hard for the tech industry to ignore us.

JOE: That’s our best shot at a bright future—if we’re all doing it together.

DANIEL: Exactly. Especially because right now—this is just some extra added context—for many different reasons, we can’t rely on the federal government, because we are locked in a geopolitical global arms race between corporations and nations to reach AI supremacy. Which is something that needs to be repeated often, because it feels like we are in an arms race that is incentivizing these companies to deploy as fast as possible.

JOE: I mean, this is a tangent, but I think the geopolitical arms race is a bit of sleight of hand. I think it’s more of an excuse to just let businesses make money and consolidate power.

DANIEL: Exactly. And so the problem is, right now, because we can’t rely necessarily on the federal government—I do believe that collectively, if we are doing this kind of coalition building across many industries, we actually have a shot of pressuring not just the tech companies, but eventually maybe the governments. And then also, if you’re in Singapore or Australia or Indonesia, it doesn’t really matter. We should all be kind of pressuring the people at the top to find a way for us to push back against the arms race, find a way for global coordination to maybe actually happen. Yeah. Huge tangent. Sorry. Let’s answer some questions. Okay. I would love to ask you the first question. Okay. So this is a question we’ve been getting a lot, and it’s totally fair. Is this just a bunch of A-listers who are trying to protect themselves? Like, how is the CCAI dealing with the fact that this is impacting crew members and people below the line and just so many people who—yeah, who don’t have the platform that, you know, you do—

JOE: And people that don’t work in Hollywood. Yeah. I completely understand this. Here we are making this video: I’m an actor who’s been in successful movies, and you’re a filmmaker who’s made a bunch of successful movies, and it could very well seem like we’re trying to be gatekeepers. And I really understand, especially from the point of view of someone who wants to make movies but has no access to Hollywood. Say you’re two young people with no money to speak of. These tools could potentially give you the power to make movies that look as high-budget and high-production-value as The Avengers or whatever else. I want that. I want that to happen. And I root for that. Believe it or not, as much as I feel so lucky to have gotten to be on some of those big sets making those big movies, I’m a hundred percent rooting for the two kids out there who can make the next huge blockbuster movie. We want that. But I also think that unless we establish proper principles for how these tools can be used and how the economic upside of these tools flows, even those two kids who made the crazy blockbuster with no budget using these tools are gonna get taken advantage of too. We’ll all get taken advantage of. There will be no human creator spared if we don’t stand up for these principles. So I think it’s actually really good that we’ve been getting these questions. I think it’s really important that this group, this coalition, that frankly did start with some Hollywood-established people, not be Hollywood preservationists.

DANIEL: A hundred percent.

JOE: And, you know, we talk about that in our discussions. And to me, I’ve been reaching out to various people I know who work outside of Hollywood, who are working on YouTube or working on other platforms, and I really think those perspectives are important. And let’s be honest, Hollywood has not been an unequivocal good for society. You know, I consider myself very lucky to have gotten to work in Hollywood, and I think that the Hollywood film and TV industry has yielded some great art. There’s been really great things for the world. But let’s be honest, it’s also yielded a bunch of crap and a bunch of bad influence. And it is exclusive and cliquish, and it’s hard to get into the Hollywood industry. And I don’t think we should be trying to preserve every single thing about the Hollywood industry. We want, ultimately, to get towards a future that is more open and that is more empowering to more and more creative people. And I do think this technology could be a part of that. But if we go down the path of least resistance, that’s not what’s gonna happen. Yeah. What’s gonna happen is not a diffusion of power. What’s gonna happen is a further concentration of power, where all the power goes into the hands of just like four or five of the biggest AI companies.

DANIEL: Yeah. One word I’ve heard people use is “democratization.” Like, they say they’re democratizing this tool, right? This tool is democratizing how films are made. But democratization without distribution of the power and the wealth and the money basically means that no one, except for a very few people, is gonna be able to have a viable job where you can make a living and, you know, take care of your family. In the same way that when Uber took over the taxi industry, a lot of people moved away from the taxis, where they had proper labor protections, into the Uber ecosystem, where there’s very little or no protection.

JOE: And—has that been good for the drivers?

DANIEL: It hasn’t been good for the drivers. Exactly. So what could happen with us is, if we just allow—again, this is not about gatekeeping, but if anyone and everyone is just using this technology without first establishing these rules, I believe more and more work is going to leave our industry. And the unfortunate thing about that is, one of the things I do want to protect about our industry is that we have very strong labor protections. We have some of the best guilds and unions in the country. And I would love to preserve a world in which working creators and creatives and crew members can actually make a living and support a family. A world in which we just allow this “democratizing” tool to be released everywhere without proper safeguards could mean the collapse of a lot of that protection and a lot of the strength that we have when we’re actually coordinated together.

JOE: The taxi industry is an interesting analogy, because it’s also worth acknowledging that the taxi industry was protectionist. It was sort of exclusive. There was corruption. There were problems with the taxi industry that deserved to be fixed and disrupted. And do we have a better situation now with Uber? I actually think that’s debatable. I don’t think it’s a hundred percent clear that the world was better before Uber, but I do think it’s clear that Uber is unfair and exploitative and there should be more protections for the people who drive Ubers. And if that means that Uber has to eat into its revenue and stock price a little bit so that it can pay fair, living wages to the people who drive the cars, well, so be it. And that’s actually what laws are for. And the tech industry does have a history—and Uber is an example of that—of getting around those kinds of guardrails that have been put in place to protect the more vulnerable and less powerful people of society. That’s what the labor movement was, you know, back before the New Deal, during the climb of the Industrial Revolution. There was no such thing as labor laws. And big businesses could put workers in really unsafe conditions, and they could work them past an eight-hour day, and they could—you know, there needed to be, ultimately, a labor movement of people coming together and saying, “Hey, we’re not gonna let you just take advantage of us one by one. We’re all gonna get together, and as a unit, negotiate for a fair workday and safe conditions and no child labor.” And those are good things. We want to keep that. And it does feel like in this day and age, in Silicon Valley, there’s sort of this libertarian bent that’s like, “No, there should be no curbing of business. Businesses should just be able to do whatever they can get away with.” And that’s not a world we want.

DANIEL: Yeah. I mean, it’s very apt. The last time we had a technological industrial revolution, we had to rise up and recreate our world and rebuild all of this stuff. And to your point that Hollywood is a flawed system—you know, I’m very lucky that I also work in this industry. But I do believe that AI is forcing a conversation—AI, mixed with all the other things that we’re dealing with, like studio mergers and the fact that so much work is leaving the country. There are so many things right now forcing the industry to have a bit of a soul-searching conversation about who we want to be and what we want to transform into. And that’s actually, to me, one of the beautiful things. Like, how can we protect the things we love about this industry but also do away with a lot of the other problems? And I believe AI is part of what’s forcing those conversations. And to wrap up this answer—are we just A-listers protecting ourselves?—it’s why it’s really important that one of the first things the coalition has done is actually bring together leadership from all the guilds and unions into one room so that we could actually be discussing these things. And so we have the DGA, the WGA, SAG, IATSE, the Teamsters, Producers United.

JOE: And those unions don’t just represent the A-listers, quote-unquote.

DANIEL: No. Ninety percent of those bodies are everyday people. In fact, a lot of blue-collar, working-class, middle-class people who are just trying to make a living. And even on our founding board, we have people who represent that side of the conversation as well—the people who are really scared about what this technology could do to their job, right? And so I know obviously having them in the room doesn’t necessarily guarantee that things are going to go well, but the one thing that really gives me some confidence in what we’re doing is that, individually, the people—all the people that I’ve been talking to through the process of building this coalition—all have big hearts, and they’re dedicating a decent chunk of their life and time for free to ensure that this doesn’t go down the default path. And so having people like us, but then also having people who are VFX artists or voice actors, you know, people who are usually overlooked and neglected in these conversations—they’re actively sitting next to us in these conversations.

JOE: And one thing—I’m repeating myself a little bit, but I think it bears repeating—that I do think we need to do better at, and we really will, is not only having those folks that are less visible, like you’re talking about, in our industry, but people outside of the Hollywood film and TV industry. Because the digital creator economy is—let’s face it—it’s bigger and more important than the Hollywood film and TV industry—

DANIEL: Yeah. I mean, and that’s also why you’re such an interesting person to be a part of this, is because you have such a rich history with that group through HitRecord and things like that. That’s part of your origin story as well. And so having you on the team speaking for those people has been really—I think it’s all—it’s all important. It’s essential.

JOE: But I would say anybody out there watching this who is a digital creator, yeah, who doesn’t work in Hollywood, who has thoughts about this, who thinks that you can bring a perspective that we ought to be exposed to—like, come. Let’s get involved, and let’s have that conversation.

DANIEL: Awesome. Next question.

JOE: All right, I’ll ask you one. Let’s see. Okay. Right. So another thing we’ve heard from a different contingent of people is, how can you be in any way accepting this technology? Are you endorsing this technology? Are you just allowing the big corporations to use this technology? Why aren’t you just advocating for a full-on ban of this technology?

DANIEL: Yeah. That’s such an important thing to discuss, and it requires sort of a longer discussion. Let’s see how much I can get through here. Because in some ways, I’m someone who doesn’t want to be using this technology. You know, I know there are people in our group who are using it, some people who are excited to use it. I’d say for 99% of applications, I’m like, I don’t know if I need it. But I’m still here having these conversations, and I’m still here pushing, because the reality is the technology is already here. And even if you think it doesn’t work and it’s never going to be viable for our industry, one of the things that I’m really worried about is the fact that the technology doesn’t actually have to be that good for it to do a lot of damage. Is McDonald’s food that good? I mean, objectively—I don’t—I love McDonald’s. I shouldn’t be talking about—yeah, but—

JOE: That sweet and sour sauce?

DANIEL: Exactly. But objectively, is it good? It’s—I don’t even know what it’s made of half the time. But did it take over the world? Yes. Did it fundamentally change our relationship with food as a country? Of course. Heart disease and obesity are leading causes of death. Not only that, but the industrialization of food has caused untold damage to our climate and to our planet. And so it doesn’t actually have to be good. The technology—like Gen AI—doesn’t have to be able to compete with Oppenheimer. Right? It just has to be good enough for the general public when it’s free and easy to access. What is that going to do to everything? I’m worried about that.

JOE: Can I interject for a second? Because I do feel like it’s an important point of clarification. When we talk about art and creativity and AI, it’s the difference between good or great art versus content that can move numbers. Yes. That’s very important. And I think it’s a philosophical and highly debatable question of whether AI will ever make great art. But will it generate content that moves numbers? Probably more effectively than any human. Yeah. I think it already kind of can. And soon, because it’s getting better and better and better, it will absolutely destroy any human in the ability to move numbers. And what that means is, if you’re a business, if you’re YouTube or you’re Netflix, and you’re thinking, hmm, I could either pay a human to try to achieve my business goals, or I could employ this technology to achieve my business goals. Well, it makes a lot more business sense to use the machine that can move more numbers.

DANIEL: Exactly.

JOE: And of course, the lie is that the thing moves numbers without humans. Yeah. Because the truth is that the thing can’t do anything without its training data, which was stolen without consent or compensation.

DANIEL: Exactly. But I bring that up because a lot of people think, “Okay, we’re in a bubble, or the technology is not good—why would we even use it?”—as if, if we just demand an outright ban, this thing will go away. And while I am worried about all the same things that anyone who’s asking for a full-on ban is worried about—I believe in the same things, I cherish the same things, I want to protect the same things—I believe we need to go in a different direction. You know, I’ve spent the last three years in this field. My team has talked to countless people in and outside of the industry.

JOE: The documentary is so great.

DANIEL: Our documentary is coming out at Sundance next week. It’s called The AI Doc, or How I Became an Apocaloptimist. I was just a producer on this doc. We had a great team. But even if we’re in a bubble and it bursts, the market valuation will drop, but the capabilities are still gonna be here. Just like when the dot-com bubble burst—the internet still came back and took over everything, and we all saw how that went. Yeah. And to me, even if I’m wrong, I would rather know that we are working towards something, just in case—building a version of our industry and our world that can actually give people an option to use this the right way. Because whenever I talk to anyone within this space—labor activists who’ve been doing this for a while—you always want to find a way to reward the good actors and penalize the bad actors. Right now, we don’t have a way to delineate between those two things. And so, to me, full-on bans never work on any technology. Over and over again, you have to find a way to give people the safe option, right? You have to give people the right option. Otherwise, everything’s gonna go underground, and people will just do it in this unregulated space or whatever. And so I believe that, unfortunately, it is here. Even if the technology never gets any better than it is right now, it’s enough to be effective. And asking for a full-on ban will get you laughed out of the rooms when you’re trying to talk to the people who are actually navigating this space.

JOE: Yeah. You’re kind of placing yourself on the sidelines.

DANIEL: You’re putting yourself on the sidelines. And so, to me—for this one little moment of time, I have some influence and some ability to open some doors—I’m trying to keep that door open as long as I can to bring the voice actors and the VFX artists and the animators into the room, so that when the people who normally make these decisions about how this technology gets used sit down, those artists will have some sort of voice and some sort of seat at the table. And so, while I don’t believe a full-on ban on Gen AI is ever going to happen, there are so many ways in which we can have nuanced conversations to build a better version of this. Yeah. Do you have any other color to add to that?

JOE: No.

DANIEL: All right. So, you know, we throw this word around—“ethical”—these standards of ethics around AI. What is your benchmark for ethically used AI? What does that even mean?

JOE: It’s a complicated question, and there’s probably not a simple answer. And part of what I think we need to build is a complicated and multi-pronged answer to that question, spelling out all the criteria that count as ethically sourced. But I will talk about one thing that I talk about a lot, and that I would put at the top of the list. And it has to do with this sort of sleight of hand, this magic trick, that the AI companies have pulled with what they call artificial intelligence. Because the name “artificial intelligence” makes it sound like, “Oh, artificially it can make things, it can write things, it can make videos.” But it can’t. Generative AI doesn’t generate anything on its own. The way these products are built is that the tech companies take enormous amounts of content and personal data from people. They take every video on YouTube, they take every book ever written, they take every article on the web, they take every song ever put out, they take every movie ever made. They take everything—home videos, your birthday pictures, your vacation photos.

DANIEL: It’s like everything.

JOE: Yeah. Take everything. And they feed that into an algorithm that then turns it into data they call tokens. And then it calculates the statistical relationship between those tokens. And it can generate these pretty incredible outputs. Yeah. But those outputs are impossible without all the human inputs. And right now, the AI companies are taking the position that they can just take all of those human inputs without permission, without compensation, without controls, without transparency. And so, to me, this is one of the main frameworks we have to get right in order to establish an ethical AI model.

And I’m not saying don’t use the technology. I’m just saying, how is it going to work? When the AI company generates an output, there should be consent, there should be compensation, there should be some controls. You should be able to say, “All right, no—you know what? You can’t use my video that I put up on YouTube.” Or, “Yeah, you can use my video, and here’s my price.” Or, “Here’s my video, and here’s my price, but you can’t use it for anything pornographic or anything political.” People should have control. And then a sort of market dynamic can emerge where the AI companies are needing to pay and compete for human content and data.

And to me, this could set us up for a bright future where there is still economic reward for human creativity. And I don’t mean economic reward like just big Hollywood hotshots getting paid huge amounts of money. I mean anybody that has a good idea about anything, whether it’s in entertainment or journalism or academia, science, medicine, whatever. Right now, the AI companies are gunning for a future where any good human idea is allowed to be taken and monetized without permission, without compensation. And where does that lead? If you take that a few steps down the road, that leads to a world—and they say this out loud, they say it—they say, “Our models are going to do all of the economically valuable work. The whole GDP, or something very close to it, is going to flow through our company.” That is their explicit goal. And that is totalitarianism. That is us going back to a world where a few kings and warlords own everything, and everybody else is a serf working on the king’s land. You know, it’s not only unjust, it’s not even good if you’re a business person. Even if you’re not a bleeding-heart liberal like me, even if you’re a full-on “let’s have a strong economy” person, it’s not good for a strong economy. Empowering a diffuse economy where people can make money, that’s what makes a strong economy. That’s why, historically, the United States became the economic superpower that it did, because back in 1770-blah-blah-blah, we said, “Nah, we’re independent from the king. We’re gonna have ownership amongst the people.” And of course, it didn’t happen overnight. And at first, it was only ownership by white men. And eventually, that had to, over time, change. And it didn’t happen fast enough, and it still hasn’t happened all the way. And these things take way the fuck too long. But we have to get it moving in that direction, because right now we’re moving back towards a kingdom where the digital world is run by like four or five kings of these big AI companies. And it gets dark, to me.

To answer your question—what’s my version of ethical AI? It would be an AI model that offers consent, compensation, controls, and transparency to all the people whose content and data is being used.
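To make that framework a little more concrete: here is a minimal sketch, in Python, of what per-creator consent, compensation, and controls could look like if they were encoded as data a platform checks before using someone’s work. Every name and field here is hypothetical—this is not any real platform’s API, just an illustration of the kind of terms Joe describes creators setting.

```python
# Hypothetical sketch of "consent, compensation, controls" as data.
# None of these names or fields come from any real platform or API.
from dataclasses import dataclass, field

@dataclass
class CreatorTerms:
    creator: str
    consent: bool                 # may this work be used for AI at all?
    price_per_use: float          # creator-set compensation per use
    banned_uses: set = field(default_factory=set)  # e.g. {"political"}

@dataclass
class UseRequest:
    work_id: str
    purpose: str                  # e.g. "entertainment", "political"

def check_use(terms: CreatorTerms, request: UseRequest) -> tuple:
    """Return (allowed, compensation_owed) for a proposed use."""
    if not terms.consent or request.purpose in terms.banned_uses:
        return (False, 0.0)
    return (True, terms.price_per_use)

# Example: a creator consents, sets a price, and bans certain uses,
# matching Joe's "here's my price, but not for anything political."
terms = CreatorTerms(creator="alice", consent=True, price_per_use=0.05,
                     banned_uses={"political", "pornographic"})
print(check_use(terms, UseRequest("vid123", "entertainment")))  # (True, 0.05)
print(check_use(terms, UseRequest("vid123", "political")))      # (False, 0.0)
```

The hard part, of course, is not the data model but the enforcement—verifying what actually went into a training set—which is why the transparency piece matters as much as the consent and compensation pieces.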

DANIEL: Yeah, yeah, yeah. Man, I got so much to say about that. I want to answer this question of ethics. But I do think a follow-up question that people talk about when—because you’re talking about inputs, right? You’re talking about training data on the input side. How do we make sure that people are fairly compensated? A question that a lot of people—you know, I speak to people at different guilds and different organizations who are worried—is, how do we ensure a world in which, even if we fight for compensation on inputs and outputs, just copyright in general, how do we ensure that the people who are benefiting aren’t just the copyright holders? Right? Because the copyright holders are the businessmen more often than not. Even most of my copyrights are owned by studios.

JOE: The movies I worked on, too. It is different when you’re an actor, because you own your face, your likeness. Yeah. So you get likeness rights.

DANIEL: And what happens to the cinematographers, the costume designers, the gaffers, anyone who is working on these things that contribute? Because hundreds and hundreds of people contribute to every single film. How do we ensure that that is actually, you know, fairly distributed? This is an open-ended question, but I ask the question because I know this is something you care about and think about when you’re talking about things like data dignity.

JOE: Yeah, it’s true. And again, there’s not a simple answer, and it goes to the need for upgraded systems. Because the truth is that all of us who agreed to work on movies and give up any ownership of the copyright of those movies, we all entered into those agreements before this technology existed. We never contemplated, when we were signing those contracts, that our signatures would be used against us—that our work could be fed into an algorithm and put us out of a job. And so, to me, there probably should be some kind of legislation. And again, this goes way beyond the movie industry. What about doctors who spend their careers working in medicine and generating all these medical records? They don’t have any ownership of those medical records. And then those medical records are gonna be fed into an algorithm to put them out of a job. This is gonna happen in so many different industries, and we need to somehow get ahead of it and acknowledge that this technology is new. There’s never been anything like it. And copyright is old. And there are certain principles, I think, that copyright stands for, and we can continue to stand up for those principles, but the details of how it works have to be revamped in light of this incredible, revolutionary new technology.

DANIEL: Yeah. Yeah. I say all of this to kind of reassure the people watching this who are concerned—this is one of the things I’m really worried about, and we’ve had a lot of discussions about it. There are so many different ways in which we can be advocating for the right way to settle this copyright issue. Yeah. But it is going to require conversations outside of our industry as well, because we are fundamentally changing the way we think about copyright and ownership and IP, all these things.

JOE: I want to add just one more example. Because you bring up a really good point about how the movie industry works. What about YouTube? Currently, if you upload a video to YouTube, you’ve agreed to some kind of contract that lets Google, who owns YouTube and also owns Google DeepMind, their AI company, use your video to train Veo 3, their video generation product, or Nano Banana, their image generation product, etc. And is that how that ought to work? I actually see YouTube as maybe one of the most fertile grounds for a potential good solution, because YouTube has a clear line of payment to all the different creators. And YouTube, actually, much more than any other platform, I think, strikes a pretty fair deal with their creators. They take the ad revenue, they split it in half. Half of it goes to YouTube, and half of it goes to all the different creators whose work is being monetized through their ad model. And half sounds pretty fair. So I feel like—and Google also is such a big company—they could potentially invent the new technology and the new systems necessary to be like, cool, if we use your YouTube video to train one of our AI models, then every time one of those AI models produces a piece of content that makes money, we can trace back whose videos were used and how important any given video was to any given output. And we are going to assign some of that ad money to all the different people whose content was used to train this model. It’s not an easy thing to accomplish. But if anybody can accomplish it, maybe Google and YouTube can. And I would love to see that happen.
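As a thought experiment, here is a minimal sketch of the payout step Joe describes, assuming the genuinely hard problem—attributing a generated output back to the training videos that influenced it—were somehow solved and produced per-video influence scores. The function name, the scores, and the 50/50 split are all assumptions for illustration, not anything YouTube or Google has built.

```python
# Hypothetical sketch: split the creator share of one output's ad revenue
# in proportion to (assumed, externally supplied) influence scores.
def split_ad_revenue(ad_revenue, influence, platform_share=0.5):
    """Return a payout per creator from one generated output's ad revenue."""
    creator_pool = ad_revenue * (1 - platform_share)  # creators' half
    total = sum(influence.values())
    if total == 0:
        return {creator: 0.0 for creator in influence}
    return {creator: creator_pool * score / total
            for creator, score in influence.items()}

# Example: a generated clip earns $10 in ads; attribution (assumed to
# exist) says three source videos influenced it in a 6:3:1 ratio.
payouts = split_ad_revenue(10.0, {"alice": 0.6, "bob": 0.3, "carol": 0.1})
print(payouts)  # {'alice': 3.0, 'bob': 1.5, 'carol': 0.5}
```

The arithmetic is trivial; the open problem is producing those influence scores credibly and at scale, which is exactly the “new technology and new systems” Joe says a company like Google would have to invent.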

DANIEL: Yeah. Yeah. There’s so much to say here. But I would love to answer this question of, what does ethical use even look like? Because, to me, one of the things that I repeat often—and I think I’m annoying about it, but I think you just got to be a little annoying sometimes—my fear is that the default path, this sort of path of least resistance that I’m talking about, is the one in which we solve the copyright problem. Because I believe that is going to happen. I believe that even if CCAI didn’t exist, a version of the copyright problem would be solved.

JOE: Yeah.

DANIEL: And it would be solved by the—yeah—the CEOs and the stakeholders and the people who actually care about this stuff because it’s gonna affect the bottom line. I believe that’s gonna happen. And then suddenly, everyone’s gonna be like, well, we did it. Ethical. We figured it out. And then a lot of that energy and leverage and power will suddenly dissipate.

Which is why what I’m trying to do is—I’m trying to sneak in, and it’s not even sneaking, it’s full-throated—we have to be worrying about jobs at the same time, right? Jobs, guardrails against deepfakes and misuse, all these things, as well as: how do we make sure that, in the face of the industrialization of creativity, we’re protecting the thing that makes us most human, which is our imagination, our creativity, and our ability to tell stories at large? Which is, in some ways, an overreach. It’s too much of a scope. I admit, this is too much for us to be holding. But I fear that if we do not try to do it all at once, the things that matter most to me are going to fall by the wayside.

And so, when I say jobs, it’s about job protection—figuring out which jobs we can protect and how we protect them, not only for the livelihood of the people who are working right now, but for the training and the continuation of our craft and the artisan traditions that we have developed over the last hundred years. But then also, sadly, we have to admit that even without AI, our industry is shrinking, right? Jobs are going away, and jobs are going overseas. Actually, there are two things I want to say about jobs here. First of all, if the AI companies are gleefully talking about how they are building this to replace the economic worth of workers, it’s on them to make sure that our workers are taken care of, right? We should be figuring out—with some sort of automation tax, some sort of way in which we can be building transition funds, retraining funds—how we can take the people most vulnerable within our industry and make sure that, no matter what, we can take care of them, whether that means protecting them and making sure that they stay, or finding ways to retrain them or find other directions for them to go in. And this is, again, a conversation that every industry is going to have to have.

JOE: I agree with everything you said, other than maybe one thing, which is that it’s on them. Because I don’t think they can—they’re for-profit businesses, and—well, it’s not in their DNA.

DANIEL: So, it’s on us to force them to contribute.

JOE: Yes. We have to get the government—there has to be—exactly. We have to be fighting for this.

DANIEL: We have to be pushing for it. And they cannot just wipe their hands clean of it. Yeah. So, to me, that’s really important. And that, to me, is something I’m worried is going to fall by the wayside, because people are just like, well, this is what happens, right? It’s like, no, this is very different. And it’s happening at a speed and scale—it’s very different when it comes to the misuse of this technology for deepfakes. Again, I don’t believe anyone else is going to step in and fight for this. I believe that we have a little bit of power, and we should be pushing back. And every business person should be worried that no one knows what’s real anymore, right? No one knows what’s real. How is a society going to function if no one knows what’s real? How are you gonna do business? To me, it feels like this is the kind of thing that should be a no-brainer. But I think we’re also paralyzed. No one really knows what to do.

And then the last thing, again, is protecting our humanity. I just want to make sure that as we are using this technology, we have the wisdom to know when to use it and when not to use it. Because there are certain use cases of gen AI where I think 99% of the ways we would use it would be bad for society and bad for humanity. I look at what’s happening with all this automated, generated slop that is just flooding everything. And it’s gotten to a point where people don’t even recognize it as AI all the time. Like, 50% maybe. And by next year, it’s gonna be completely unrecognizable. There are certain use cases of this technology that I think we need to be saying no to, so we can protect, as a greater society, our communal story, the myth that we collectively hold together when we tell stories with each other and communicate and dream together.

We are what we eat, right? We are what we eat. And if the last 10 years has taught us anything, we are also what we see and what we read and what we hear. And I personally believe that—again, this is a much larger conversation—we need to be building new systems and new institutions and new ways to look at imagination and creativity as a muscle. Just like our government has all these programs for fitness and health, we need to be doing that for our mental and creative health as well, pushing back against the easy path.

JOE: Exactly.

DANIEL: How do we push back against the atrophying of the mind? How do we push back against all these things? To me, if we can find a way to build metrics for all of these things and push back against the tech industry’s desire to release this technology without first addressing those things, then there might be a way in which some AI use will be ethical and potentially even beneficial.

JOE: There’s a part of this that is deeper than any transaction. And I love it when you go here, because I’ve been in rooms with you where we’re talking brass tacks with business people. And sometimes business people don’t want to talk about the philosophy. They don’t want to talk about the stuff that you can’t quantify in numbers. But that is a part of this coalition. It goes beyond any lawsuit or legislation or anything with a right angle. It’s something that I feel, and I know you feel, and I bet a lot of other people feel. It’s, as you said, part of what makes us human, part of what gives us a soul. It gives us kind of the sanctity of what it means to be alive: storytelling. And I don’t mean storytelling like a movie or a show or a product or a commodity. This is, by the way, one of the things I think Hollywood is just as guilty of as Silicon Valley: turning this sacred human thing into a commodity, into purely a business. And Hollywood did that way before Silicon Valley ever did. I do feel like part of this coalition is just having some communion and recognizing fellow humans who care about this, who also feel that this is about more than dollars and cents or views and likes or any business or anything like that. This is about something deeper, something that we all share as fellow people.

DANIEL: Yeah. What you’re getting at is that one of the benefits of AI right now is that it’s forcing us to look in the mirror, right? What does it mean to be a human? And what you’re saying about the film industry failing our audiences—I believe we’ve hit a point where you could say that about almost every industry. Is education failing a lot of kids? Yes, right now it is. Is the medical field prioritizing profit over care? Exactly. And then you look at even religion. What have many religions within our country become? They’ve become businesses. Over and over again, people are realizing that our institutions are failing us. And the way that AI is going to supercharge and speed up all of those trajectories is forcing us to go back to first principles and ask ourselves, what does it mean to educate? What does it mean to tell a story? What does it mean to heal someone, or to have a church? What does it mean to have a spiritual community and a personal spiritual walk with whatever you believe in? So much of what drives me when I’m working with people like you and the rest of the coalition is these questions. Because in my heart of hearts, if I can be really gooey for a little bit, I really dream of a world in which, by facing down these problems and these risks that our industry is facing, we will be forced to reckon with some of these questions. And some of us—I’m not saying all of us; not everyone’s going to want to do this—but some of us are going to really want to stand up and transform what it means to be a storyteller and become good stewards of the craft and the responsibility that it is to be a storyteller. Like I said, we are what we see, we are what we watch—all these things. Stories have a profound impact on the way that individuals and communities move through the world. And now that this technology has made storytelling so accessible—anyone can technically tell stories—it’s really shown me, and hopefully shown a lot of other people, how much responsibility should be going into this. Like, what is our Hippocratic oath as storytellers? It might sound flippant—obviously saving people’s physical lives is very different from telling stories—but as a storyteller, I would like to ask that question for myself. What is my responsibility to my audience? What is my responsibility to my community and to society? What’s my responsibility to my crew? I do believe that some in Hollywood are soul-searching. And I’m just really hoping that a lot of people like ourselves can be in those conversations to help point towards other options and other directions, other paths for what our industry could be and what our craft could be. Well, that felt good.

JOE: Yeah, get some of that stuff out. We’ve got three more hours, four more hours.

DANIEL: We could keep going. All right, so a question we get—because we have the four pillars, right? We’re talking about copyright, jobs, guardrails, and protecting humanity. A lot of people are wondering, why is environmental impact not something that is on that list?

JOE: Yeah. It’s certainly a huge deal. I mean, I’ve been paying attention to climate change since the ‘90s, since I was a young kid able to pay attention to what was going on in the world. And certainly, the impact of building these data centers and just the massive amount of construction—everything that’s going into building up this AI industry—is having an impact on the environment. I’m a dad. I care very much that the climate stays healthy and safe for human beings moving into the future. But you do see some claims online—for example, about how much water gets used every time you query a chatbot—that I think are just factually wrong. So I do think we should be clear-eyed about how we talk about AI and the environment. But that’s not to say it’s not important. I think it’s incredibly important. So I’ll turn the question to you, because we debated whether or not to include this in our four areas. The environment is one of the things I care about most in my life.

DANIEL: Honestly, I’ve become someone who just loves ecology and nature, someone who really wants to be protecting the earth. And I will say that the environmental impact of AI is a real issue. Even if it’s smaller than agriculture and beef farming and that kind of stuff, it is still really important. And my fear is around scaling: as this scales up, it will potentially get worse. So the thing we need to be wary of is, as the Creators Coalition, with limited time, money, resources, and energy, how do we make sure that we stay in our lane, dealing with the things that are our domain and our expertise? Meanwhile, like I said, the fourth point of our strategy is connecting with other organizations. There are so many incredible organizations that have already formed around the environmental impacts of AI use. And so there is a world in which, later down the line, we find a way to pull some of these organizations together to help us understand and figure out how this works within our industry. But in the short term, we need to focus on what we actually believe is our strength and our responsibility right now. That being said, as the Creators Coalition continues this conversation, it’s still on the table—how we deal with the environment and resource management and resource allocation when it comes to AI use. Because I’m gonna say something that might need to be unpacked. In the vein of talking about how Hollywood isn’t a perfect system: Hollywood can be very wasteful. There’s a lot of bloat. We build these amazing sets and then just take them down and destroy them or burn them. There are specific use cases of gen AI that would actually be better for the environment than what we do now, if I’m being very honest. And I do believe that we should be fighting for other ways to use gen AI. Right now, everyone’s talking about generalized large language models, which require a lot of resources and massive data centers, and I think that’s incredibly problematic. There are a lot of communities around the country and around the world who are pushing back and winning—actually blocking data centers—which is amazing, and we should continue doing that. But I believe the future is actually small language models—bespoke, task-specific models rather than generalized ones—that are far less resource-intensive and can still get the job done. It’s very specific, very nuanced, but it’s worth saying that that is a possibility and something we should be considering when we talk about the environment and AI. It cuts both ways in an interesting way. But to sum it up: we care deeply about the environment, and we believe this is something worth fighting for. We just don’t know if it’s within our current bandwidth or our expertise. That being said, we are in conversation with organizations who are concerned about the intersection of AI and the environment, and those are the kinds of relationships and partnerships we’re going to continue to foster and grow, so that, hopefully, we can take that information to our industry.

JOE: So some people asked about how some of the people on our founders list own, invest in, or work at AI companies, and asked: is that not a conflict of interest?

DANIEL: Yeah. Look, it’s gonna get messy when you try to build a broad coalition. But a greater plurality of voices is only going to benefit you when you’re trying to fill in blind spots, especially with something as nuanced and complicated as this. So we actually felt it was important to be working with people who are actively working with this technology and trying to build it, as long as those people share the same values. And that’s the thing that makes me feel really proud of the work the coalition is doing. Everyone I work with, whether they’re crew members or people who actually work at these specific AI companies, is actively pushing for protections for the things we care about. The reason some of these people even started companies was that they were worried about how the bigger tech companies and bigger AI labs were building and deploying this. And you’re talking about Natasha and Paul, right?

JOE: Yeah, Stereo. Yeah.

DANIEL: Stereo. And Natasha and Paul—one of the first conversations I had with both of them was about how concerned they were with how this technology was being built and deployed. That is why they decided to build this smaller company. And they are trying their best in a vacuum. Now we’re trying to pull them into this bigger conversation to see if there’s a way we can actually show and prove to the rest of the tech industry that there are ways you can do this properly. Right now, Stereo has built foundation models trained entirely on licensed material. There are some drawbacks, because that makes the technology slower and maybe not as good. Personally—and I think Joe agrees with this—we’re talking about building some sort of standard of ethics and fairness around AI and creativity. Again, we haven’t set that standard yet, and we don’t fully know how to build those things. But one way we can do that faster is by working with people who care deeply about the same things we do but also understand how this stuff works.

JOE: Yeah. And again, it’s part of the coalition having a really wide variety of perspectives and experiences. I think that’s probably the best way to get to a bright future. It’s not to have purity tests and exclude everybody except the people we a hundred percent agree with. It’s a coalition—let’s build a really broad coalition on this stuff.

DANIEL: Yeah. I mean, that goes back to one of the first things we said, which is that we cannot allow this conversation to become polarized and broken up. And you used the term “purity test.” I feel like the last 10 years have shown us what happens when you close off all these doors. We lose power. We lose collective leverage when we aren’t willing to see that everything is going to be some version of compromise. But how do you ensure that the foundational values are still going to be fought for and protected? Any conversation I have with these people who work with AI companies always starts with: how can we make sure that the VFX artists and the animators and all these people are protected? And how do we ensure that these tools are actually used as tools, not as a replacement? That, to me, is the reason why it feels important that those voices are included in the coalition as well.

JOE: Yeah. Well said. So what can people do right now?

DANIEL: Yeah. First up, if people want to learn more, follow along, and be supportive: sign up on the website, add your name to the list, be part of this growing number. That can be huge power when we move into rooms and need to be talking—it’s really great to have those names to point to. And I know there are a lot of big, renowned names on that list.

JOE: You might see that list and think, uh, my name doesn’t belong on that list. But we want the coalition to be a really broad and wide variety of people. Everybody’s name belongs on that list.

DANIEL: Yeah. So that’s first up. Second is that we are a volunteer-run organization—everyone who does anything is doing it for free. Right now we’re looking for an executive director, and we’re looking for funds through fundraising, and that’s gonna be great. But for right now, we are all volunteers, and we need people who are passionate and excited. So if you think you have a very specific skill set or a very specific point of view, feel free to reach out. We’re still working out our process, but we’re trying to find ways to bring in more people to help, because there’s a lot to do. This is a many-tentacled creature we’re trying to tackle. And then the last thing is that we are looking for more clarity and more information. Because we want to be making informed decisions, we’re going to be looking for opportunities to have discussions and to poll our community—to really get a picture, a temperature check, of what people believe. What are you afraid of? What versions of AI are you excited about? All these things. So look out for those opportunities, because we would love to hear from you. The more we hear, the more we learn, and the better our decisions can be.

JOE: Yeah. Excellent. Thanks for doing this with me.

DANIEL: Yeah. Thank you. Oh, one last thing you can be doing right now: just have conversations. Have conversations with your co-workers, with your family, with the people around you, the other creators in your community. Have hard conversations, honest conversations. Build safe spaces where you can be honest. Because the moment it becomes divisive, the moment it becomes a fight, that’s when we make the worst decisions—that’s when we’re working from an emotional standpoint, not from a place where we can actually work together to find proper solutions. Debate can be healthy. It doesn’t have to be a fight.

JOE: Yeah, exactly. So having those conversations right now, and developing within yourself the language for how to talk about this—your own personal feelings on what is responsible, what is ethical, what standards there should be—if you know that, and you can communicate it, then we’re gonna have an easier job collectively doing this together, right?

DANIEL: All right.

JOE: Yeah, man. There’s so much more to say.

DANIEL: Thank you, Joe. This is awesome.

JOE: Well, we’ll see all the comments.

DANIEL: Okay. We won’t do this again.
