What does “responsible AI governance” mean anyway?
Welcome to the second written post of “DC Decoded,” full transcript of the latest podcast episode included!
What did we talk about?
In our second episode, John Heflin Hopkins-Gillespie of Trustible explains the difficulties in defining AI and responsible AI governance, emphasizing the need to balance innovation with risk mitigation.
John and I explore US and international approaches to AI regulation, highlighting the push for innovation and competition, especially with China. John also points out the importance of DC as a hub for AI development because of its decision-making, political, and legal power.
We also touch on the trend of state-level AI regulation and discuss concerns about a lack of uniformity at the state level. John advocates for common-sense AI regulations that foster innovation while minimizing harm.
Full transcript
Below is a rush, unedited transcript of the episode. Please check against the audio/video recording.
Have a topic or know a person who would be good for the podcast? Send a message!
John Hopkins-Gillespie: DC really is the scene. It's under-appreciated, I think, in many regards. I think we think of tech and we automatically go to Silicon Valley. But, you know, I mean, there's some really great tech hotspots out there. I mean, of course, New York is, you know, another, you know, really burgeoning hot tech scene. You know, when you really think about what could or could not happen with the future of the technology, I mean, that really sits here.
Sam Li: Why does DC have a unique place in the responsible AI governance discussion? John Heflin Hopkins-Gillespie, Director of Policy and Product Counsel at Trustible AI, joins the show to discuss that question and more. In this episode of DC Decoded, I, Sam Li, get to chat with John about what responsible AI governance really means, why the US is charting a more independent AI path compared to the rest of the world, and why DC should be considered a tech hotbed. Let's get going.
Sam Li: Welcome back to DC Decoded, a Gen AI podcast where we break down what happens when artificial intelligence meets the nation's capital. I'm your host, Sam Li, part of the Gen AI DC chapter. Today, we'll be talking with John Heflin Hopkins-Gillespie. John is currently the Director of Policy and Product Counsel at Trustible AI, advising the company on AI regulatory developments and overseeing legal compliance issues for the Trustible platform. Previously, he worked on cybersecurity incident response as an associate at Alston & Bird, and on privacy, data security, IP, and AI at Wilkinson Barker Knauer. John serves as co-president of the DC LGBTQ+ Bar Association and the Executive Committee Freedom Fund Chair for the NAACP Alexandria Branch 7043. He was recognized by the Washington Business Journal as one of the 2024 LGBTQ+ Business Leaders, and he received his JD from Georgetown University and a Bachelor of Arts in Political Science from Marquette University. John, welcome to the show.
John Hopkins-Gillespie: Thanks, Sam. And I appreciate you getting through my name. I know it's quite a mouthful.
Sam Li: No worries at all. So, John, how do you define artificial intelligence or AI?
John Hopkins-Gillespie: Yeah. So, well, I'll start off by saying if I knew exactly how to define it, I'd probably be a millionaire, because isn't that like the million or billion dollar question? So, you know, I think the best analogy that I can give on a definition is Supreme Court Justice Stewart, who, when asked to define obscenity in the context of pornography, said, "I know it when I see it." And funny enough, I was telling my husband, you know, that was the answer I was going to give, because, you know, in the space, I think it resonates with a lot of people. And he very smartly said to me, but it's artificial intelligence, you don't see it. And I was like, well, yes, that's true. You don't literally see the machines at work. It's more of a figurative analogy. But I think the phrase is apt when describing AI, because if you're talking to different personas, industry, academia, policymakers, you really are going to get a wide range of answers of what exactly AI is. And so I think we've kind of coalesced around using the term AI interchangeably with algorithms and predictive systems and automated decision-making systems and machine learning. And, you know, there's an umbrella of different functions and features and tools that fall within the scope of, you know, artificial intelligence. But I think if we're really trying to understand what it is, we have to be really precise in understanding, you know, what is the task that we're trying to accomplish? And going from there, we can figure out what exactly we should be using in terms of terminology. So it's a question that I think everyone is struggling with. Again, I think you'll get different answers depending on who you ask. I think if lawmakers understood how to define AI, we'd be in a very different position. There is a federal definition of AI. There's a definition of artificial intelligence for the EU AI Act. NIST, you know, uses the federal definition, but it's also acknowledged other definitions from standards bodies and other nonprofit and intergovernmental organizations. But yeah, you know, I think... I think if we could answer that question, we could end this podcast now and go enjoy retirement.
Sam Li: Right. So there's a lot to talk about is what I'm hearing. Well, let's niche down a little bit specifically with what you've been working on for the past several years now. How would you define responsible AI governance? How would you define that term?
John Hopkins-Gillespie: So here's how I think about it, right? And I've talked to a few people who've made this really apt analogy about, you know, when cars came into society and they became widespread. When you think about it from, you know, any kind of product or service development, for me, it's thinking through what are the risks, what are the benefits, and how do we do our best to minimize the risks while maximizing the benefits? So when we're thinking about responsible AI, we need to think about how the system is being applied, right? Is it being applied as a product? Is it being applied as a service? How are people interacting with it? And then tease out what exactly are the risks that come from the intended use or the intended purpose of the system. And then, you know, be intentional about the benefits. I think, you know, we oftentimes get lost in the pitfalls of how these systems can detrimentally impact us, and we don't think enough about benefits. You know, at Trustible, we're working on weighing the risks of systems that enterprises are using and deploying against the benefits, and helping to kind of realize, you know, in both verbiage and dollar amounts, what that looks like. And when you're thinking about responsible AI development, it really is, to me, you know: how can we minimize the risks and maximize the benefits?
Sam Li: Right. Minimizing risks, maximizing benefits. Were these things you were already thinking about when you were a poli-sci major at Marquette? Were these things coming up during law school? When did you personally start feeling like this was something maybe worth pursuing as a career?
John Hopkins-Gillespie: So I will tell you, I was in undergrad from '08 to 2012. And, you know, around the time I graduated, at about 21 or 22, I struggled with understanding how Google Docs and Google Forms worked. So it's really kind of a sweet poetic justice moment that my career post-law school is in emerging technology. But, you know, it really starts with when I graduated. I worked in politics. I worked two campaigns, well, I worked more than two campaigns, but I worked two big campaigns, and then I went and worked for the state government. I worked for Governor Terry McAuliffe when he was in office from 2014 to 2018. I was his personal aide, his body guy, for the D.C. folks that are listening who understand what that means. And one of his big initiatives when he was involved with the National Governors Association was cybersecurity and how do we build modern, resilient networks and infrastructure to secure taxpayer data. And, you know, it was really shocking to me, having not really explored the topic much, to learn how antiquated and outdated these infrastructures and networks were, but how vital they were to people's lives. You know, you're talking about people who are receiving benefits or healthcare or need government services, and these systems are just outdated and ripe for exploitation. And, you know, so his initiative was trying to modernize the Virginia state networks, but also kind of rallying the troops when he was in charge of the National Governors Association to get other governors on board with, you know, modernizing their networks. And it really got me thinking. At the time, I was trying to figure out what to do post-administration, Virginia governors get one term, and I, you know, decided at that point to go to law school. But I was thinking about, you know, what would a good outcome be for me in law school and post-law school? And so that was really where I got bitten by the bug when it came to emerging technologies and thinking about, you know, the forefront of these issues, whether it's cyber, privacy, biometrics, AI. You know, these are all the things of the future that are going to make life easier, more equitable, more accessible for people. But it comes with an enormous amount of risk. And for me, thinking about what would be a worthwhile legal career, that was what propelled me through law school: to focus on emerging technology. You know, Georgetown has a wonderful curriculum. It's robust. So I gobbled up as many tech classes as I could. I worked during law school, so I worked my schedule so that if there was a class I really wanted to take, I could take it. Yeah, and that was really what pushed me in that direction. So when I came out of law school, I knew that wherever I was going to land, firm, nonprofit, animal, mineral, vegetable, right, wherever I was landing, this was the space I wanted to be in. And so that just really guided my career, right, you know, from before law school until now.
Sam Li: Right. And for those who might not know, what is Trustible AI, the place you work at now as Director of Policy and Product Counsel? What is Trustible AI doing?
John Hopkins-Gillespie: So Trustible is a software-as-a-service platform. And, you know, what I love about our startup, and it's a startup, what I love about our mission, is that we're helping companies operationalize what can sometimes be the amorphous concept of AI governance. You know, I think the term has become one of those buzz terms over the past year, six months or so, where, you know, now there's a push of, we've got to have AI governance. But almost similar to "what is AI," it's an apt question to ask, what is AI governance? And so, you know, our job at Trustible is to help automate and operationalize various frameworks and standards, you know, whether it's regulatory, like the EU AI Act, whether it's guiding frameworks, like NIST's, you know, whether it's policy positions or toolkits that are out there, like we see in Singapore, right? We're taking all of these different touch points and trying to put them into a platform. So when, you know, our customers, our prospective customers, are trying to wrap their heads around how do we understand where AI is being used in our organization, how do we understand the risks, how do we understand the benefits, how do we understand our corporate governance structures around AI, Trustible is there to fill those gaps and help accelerate AI adoption. You know, what we're finding with our customers is when there are robust systems in place, when you have components that make up your AI governance, depending on your size and scale, companies, orgs, they're more willing to adopt and deploy AI technologies internally and externally, because there's trust that's being built into their ecosystems around the tools. Our CEO calls us the TurboTax of AI governance, which I think is great because I think it really kind of helps analogize what exactly we do. And we look at it through the lens of responsible AI. So I love that you asked, what do you think of when you think of responsible AI? Because it's not just thinking through operationalizing AI governance. It's thinking about how can we be responsible with the design, the development, the deployment, the retirement of tools across an enterprise.

Sam Li: Right, that makes a lot of sense. And for our listeners who might not know, can you spell out what NIST stands for?

John Hopkins-Gillespie: Yeah, so NIST is the National Institute of Standards and Technology. It sits under the Department of Commerce. It is not a regulatory agency; it's a policy-setting agency that, while it does have standards in its name, doesn't actually prescribe standards, but it does set out very, very good, very detailed policy guidance on emerging technology issues across so many different topics, from cybersecurity to privacy to AI.
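To make "operationalizing a framework" a bit more concrete, here's a minimal sketch of what a use-case register mapped to the EU AI Act's broad risk tiers could look like. The tier names track the Act's general structure, but the class names and control labels are hypothetical illustrations, not Trustible's product and not the Act's actual text.

```python
from dataclasses import dataclass, field
from enum import Enum

# EU AI Act risk tiers, heavily simplified for illustration.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g., hiring, credit, critical infrastructure
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # everything else

@dataclass
class AIUseCase:
    name: str
    purpose: str
    tier: RiskTier
    controls: list[str] = field(default_factory=list)  # governance tasks attached

def required_controls(use_case: AIUseCase) -> list[str]:
    """Attach governance tasks by tier. Labels are hypothetical, not legal text."""
    if use_case.tier is RiskTier.HIGH:
        return ["risk assessment", "human oversight", "logging", "technical documentation"]
    if use_case.tier is RiskTier.LIMITED:
        return ["disclose AI interaction to users"]
    return []

screener = AIUseCase("resume-screener", "rank job applicants", RiskTier.HIGH)
screener.controls = required_controls(screener)
print(screener.controls)
```

The point of the sketch is the shape of the problem John describes: an inventory of where AI is used, a risk classification per use case, and a checklist of obligations derived from that classification.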
Sam Li: Right, okay, good to know for anybody who might not know that. On the topic of frameworks, as somebody who works in this field, do you see frameworks, specifically around AI governance, trending toward coalescing among a few frameworks that apply to a wide range of companies and organizations? Or is it the opposite: with how fast AI is evolving and changing, are frameworks splintering into more and more of a case-by-case basis? Where are we right now in terms of the direction of these frameworks? Is it more or less?
John Hopkins-Gillespie: I'm going to give you my favorite attorney answer. It depends.
Sam Li: Sure.
John Hopkins-Gillespie: And it's so true. So, you know, there are a lot of players in this space. And so when you think about what entity you're dealing with, that's going to dictate, you know, where the framework is going. So I'll give you some examples. You know, the U.S., of course, is the most exceptional nation in the history of nations, right? And so we're looking at, you know, what does oversight in AI mean through the lens of, you know, the American experience. And so, you know, we want to be in the driver's seat on these issues. And so, you know, we will consider, you know, whatever administration is in place, you know, we'll look to potentially align with how our international partners are looking at it. But we also have this spin to it where we want to be the ones kind of dictating the tone and the cadence of what that looks like. So Congress, specifically the Senate, but the House also did some work on this over the last couple of years, has really dug into what a responsible AI regulatory framework potentially looks like. And so there's somewhat of an agreement between the House and the Senate. And if you really want to talk about trying to get people on the same page, get the House and the Senate on the same page. That's a challenge in and of itself. There's a coalescing around the idea, at least in the U.S., that when it comes to regulating AI, thinking about AI regulations, there's existing law and then there's gaps that can be filled by new law. And so that's how Congress has been thinking about a potential framework. When we look at NIST, they look at AI through the lens of risk. And so, you know, NIST has the AI Risk Management Framework. And so they're looking at how organizations are able to have programs, policies, procedures in place that address risk with AI systems, right? And it's just risk-focused. And AI governance is, you know, more than just thinking about it through risk. There's a lot of different things; risk kind of runs through it all, but, you know, they're really focused on the high-risk aspect of it. When you look at the EU, they're looking at it from a consumer protection, consumer transparency, tech regulatory lens. It's kind of a conglomeration of a few different concepts there. And, you know, I think some of the thought leadership around the EU AI Act has been, you know, that Europe, or the EU, has tried to not make some of the same mistakes that we saw with GDPR, the General Data Protection Regulation that governs data privacy in the EU. I think, you know, there's been thought that that's what they've tried to do. And yet there's some pretty heavy criticism about how far the Act goes, how prescriptive it is. It really does look at AI through the lens of product safety, consumer safety, tech regulation. So you have a few different models out there about what does reining in AI look like, what does oversight look like. There's some alignment. There's, you know, there's some concerted efforts to be aligned, but each of the actors is kind of putting their own spin on what that looks like.
Sam Li: Right. So let's stay on DC for a second. What AI regulations or discussions happening in Congress or other political bodies should businesses be paying attention to right now? Like, what is going on?
John Hopkins-Gillespie: You know, there has been a pretty big pivot since January 20th on what that looks like. And so I think, you know, under the Biden administration, there was, I think, more alertness to what could potentially come down the pipe in terms of any federal action on AI. Now, there are plenty of people who will say, if you can tell me about AI regs that are coming out of Congress, I've got a bridge to sell you in Brooklyn, because there's not a lot of faith in sometimes what our Congress folks can get across the finish line. But I do think that had we had a Democratic administration, there probably would have been some light-touch rules that had come out to kind of clarify certain laws, with maybe some stopgap measures in between to, you know, cover things that weren't necessarily covered by existing federal law. That has completely changed. And so I think what we're seeing now in DC at the federal level is a push for more experimentation, more innovation, more proactive AI-arms-race development, right? Because that's really, in my opinion, how the Trump administration is looking at the realm of AI. It is: we are in a global race with economic competitors, from China, from India, you know, potentially now from Europe, and we need to be number one in that race. And so, so much as we can keep the regulators at bay and let tech go forward and develop and push the boundaries of the technology, I think that's what they're doing. And because Congress is, you know, both houses are controlled by Republicans, I think that mantra is ringing true. And so, you know, on the federal level, I think there's probably less to be concerned about. I don't have a crystal ball, for starters, and I have been known to be wrong in the past, so you never know, Congress could surprise us. I think, honestly, the thing that we should be keeping an eye on, and this is, I think, apt and hits a little bit close to home, is what's happening at the state level. Right now, in Virginia, there's a Colorado-style bill that's just passed the General Assembly. I think it just passed out of the General Assembly last week, and it's on Governor Youngkin's desk now, and there's not a lot of foresight into whether or not he signs it. But that's really where we should be looking right now, because states are moving forward on these issues. You know, Colorado's passed their bill. Connecticut tried to pass a very similar style bill last year, and I think they're going to try to do it again. There's a consumer protection bill up in the New York state legislature. There's sector-specific pieces of legislation peppered across some states. I think New Mexico and Washington have some pretty big AI bills that they're looking at in their state legislatures. So, you know, they say all politics is local, and I think this is a really great example of that. You know, we're seeing a message from the federal level about what innovation looks like. And I think you're seeing at the state level, okay, well, we're going to be labs of democracy and experiment with some of these rules and see what works and what doesn't work.
I completely forgot this, but Texas is also trying to stake their claim on AI rules and has a bill in their legislature. It's a Republican-sponsored bill, and I'd be shocked if it doesn't make it across the finish line, because I think they're trying to be a leader in the space as well. And, you know, they're trying to attract tech companies, so, you know, I think they're thinking about this too. But yeah, the state level. Obviously, I know, you know, everybody in DC wants to know what's going on in DC, but really a lot of the action right now in the U.S. is percolating at the state level.
Sam Li: And what do you make of these state-level bills? Like, on average, what do you think these bills are getting right versus what could be concerning if implemented?
John Hopkins-Gillespie: You know, so my chief concern is similar to what we've seen with cybersecurity and privacy: that you once again get a patchwork of laws where there's just enough difference between states that you have to have a completely separate process to deal with a component of a state law, right? And so what works in California, you know, maybe 70 percent of it will work in Virginia. But there's like that 25 percent that's specific to Virginia, I'm sorry, the Commonwealth of Virginia, let me correct myself, that you've got to do, or there's the 25 percent that's unique to the state of New York, and you've got to figure that piece out. And so really, that's where you start getting into some issues when it comes to what are we trying to do with the rules, right? Are we trying to be responsible with the ecosystem and help develop trust and safety for these products? Or is this becoming a check-the-box regulatory exercise that every state wants to put their unique spin on? And so that's, you know, that's really the pitfall. It's, you know, it's not which state's right, which state's wrong. It's really thinking about, is this the right approach, where we have, you know, 50 states? And, you know, having seen it with, you know, incident response work, I'll say it's messy. When you have a breach, and you're talking about a multi-jurisdictional breach with multiple state laws involved, I think the people who are affected by the bad thing, in this case, you know, a malicious actor doing something to someone's system, but, you know, if we take it to the AI context, if something happens with the AI system, right, it's really the consumers and, you know, the public that gets left behind in these things. Because, you know, I mean, I'm sure you've gotten notices in the mail from whomever that something's happened, there's been an incident, and you get free credit monitoring. But people have kind of become numb to that. And so, you know, when we're thinking about who is the intended audience, what do we want as outcomes, and how do we want this to work in a way that actually builds trust in the ecosystem, we really need to take a step back and think about: is a 50-state, or a 25-state, or however many states decide to get into this game when it comes to AI, the right approach? And is it really doing a service to the intended audience?
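That "extra 25 percent" lends itself to a small illustration. Here's a minimal sketch of why overlapping-but-different state rules end up requiring a separate process per jurisdiction. The state abbreviations are real, but the requirement labels are hypothetical stand-ins, not any statute's actual obligations.

```python
# Per-state obligations that overlap but don't match. Requirement labels
# are hypothetical illustrations, not statutory text.
STATE_REQUIREMENTS: dict[str, set[str]] = {
    "CO": {"impact assessment", "consumer notice", "AG notification on harm"},
    "VA": {"impact assessment", "consumer notice", "appeal mechanism"},
    "NY": {"consumer notice", "bias audit", "annual disclosure"},
}

def compliance_plan(states: list[str]) -> dict[str, set[str]]:
    """Return each state's obligations beyond the baseline shared by all of them."""
    selected = [STATE_REQUIREMENTS[s] for s in states]
    baseline = set.intersection(*selected) if selected else set()
    return {s: STATE_REQUIREMENTS[s] - baseline for s in states}

print(compliance_plan(["CO", "VA", "NY"]))
```

The intersection is the part a single shared compliance process can cover; each state's leftover set is the piece that forces its own workflow, which is exactly the patchwork cost John is describing.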
Sam Li: Right. I want to close out this part of the discussion with a quick quote followed by a question. So as I was preparing for this episode, I found this Stephen Hawking quote, and, like with, I think, all famous-figure quotes, it's like a 50% chance it was actually them who said it. But I think he wrote this one down in an article, so I feel more confident saying it was the Stephen Hawking. Before he passed, this was the mid-2010s, he stated that artificial intelligence is likely to be either the best or worst thing to happen to humanity. And regulations, this is now outside the quote, AI regulations probably will just continue to be part of that conversation. So, in your opinion, what is missing in today's AI regulations to ensure that it's more likely to be one of the better things to happen to humanity rather than the opposite?
John Hopkins-Gillespie: Yeah. So let me kind of take this at a higher level, and I'll tell you, it was really one of the reasons that I wanted to join Trustible, you know, specifically, you know, when I was going through the interview process. So, you know, Trustible brands itself as a company that looks at AI through the lens of AI pragmatism. And, you know, I completely agree with that lens of thinking, right? There's obviously extremes on both sides, where there's AI dystopia and there's AI utopia. But really, when we think about how the technology should be used and how it should be deployed and how it should be overseen, it really is a balance between unlocking innovation that can help people. This is personal for me, in thinking that AI can really be a tool that unlocks potential for underrepresented communities. But at the same time, you know, with anything, right, like, you know, I mentioned with cars, right, there needs to be common-sense guardrails that are set up. So that way, you know, the tools, the technologies, the systems are, you know, being put out into the public in a way that doesn't harm people. Or if there is harm, it can be minimized, or people are given recourse, you know, to fix the harms done by these systems. And so, you know, when I think about what rules and legislation can look like, the principle should really be what is common sense, right? When we get all of the stakeholders in the room, when we get industry in the room, we get academia in the room, and we have the ear of policymakers, how can we strike a good balance between good ideas? Because there's a lot of good ideas, you know, among these different stakeholder groups. There's also a lot of bad ideas among these stakeholder groups. And so, you know, when we get into rooms and we have these conversations, you know, how can we kind of go for the Goldilocks of making sure that companies, and, you know, I'm not really talking about the big players, I'm talking about, you know, creating an ecosystem where the entry barriers are low enough that if you, you know, you, Sam, have an idea for a tool that can improve people's credit scores, right, and, you know, help them do that, or have a healthcare idea, right, you have the potential to get into the ecosystem without having to worry about cumbersome rules and regs and paperwork and filing. But at the same time, there's stability in the rules that exist. So when you enter the field, you know, okay, my hands aren't going to be tied behind my back, but there's just some common-sense things that are out there that I have to do to make sure that the idea that I have, that could be the next big thing, you know, isn't going to have, you know, rippling negative consequences for, you know, society, for certain communities. So, you know, that's how I think about it. It's not what's good, what's bad. It's what works, right? What are we trying to solve, and then figure out what are some common-sense rules that we can have to make that outcome possible.
Sam Li: The last area I want to touch on is the more international perspective, especially as you bring up frameworks and stuff that the EU is doing, for example. So what is America's current relationship with the rest of the world on AI right now? Is it different from how we approach other more international issues like climate issues or defense spending, for example? Where are we at with the rest of the world on AI?
John Hopkins-Gillespie: If this was a Facebook status, "it's complicated" would probably be the best way to describe it. So, you know, nothing exists in a vacuum. And so, just like all of our other geopolitical priorities, you know, AI is influenced by a number of different things. On one hand, you know, the US wants to be the leader, right? Like, we want the technology made here. We want the chips made here. We want the talent to be here. In the same breath, we, you know, under the Trump administration, are trying to chart a unique American path where, you know, if maybe the rest of the world has agreed on a particular topic, we are not necessarily feeling beholden to the rest of the world with respect to signing on to it. And, you know, I'll give you an example. At the Paris AI Action Summit a couple of weeks ago, there was a joint declaration that was signed by every country that was participating except the U.S. and the UK. And there were some language issues around it and, you know, what was included and what was not included. And, you know, I think in maybe a different lifetime or a different decade, the idea that America would walk away from something as consequential as a joint statement that the rest of the world, including China, who, you know, is an economic competitor in this AI arms race, right, has signed on to, the idea of the U.S. walking away from that would be slightly unfathomable, right? And so I think there is a certain level of comfortability to do things in a way that isn't necessarily comporting with how the rest of the world is viewing the issue. That being said, you know, the mantra of innovation is starting to really ripple around the globe. And so what you didn't see in Paris is probably just as loud as what you saw. And what you didn't see in Paris is a focus on AI safety. And that is at odds with where we were at the last global summit, when the UK hosted it, and it was literally called the UK AI Safety Summit. Paris was the AI Action Summit. And so France, Germany, Italy have voiced concerns over AI innovation in the EU for a number of years now, particularly around the negotiations over the EU AI Act. And it would appear that those concerns are starting to win out when it comes to how the EU is thinking about, you know, AI development, AI innovation, tech companies, because you're starting to see a pivot toward the same innovation focus that's happening here in the US. You know, France wants to be a leader in this. And so, you know, they want to be a player on the global stage when it comes to standards and the technology. You know, they have Mistral, but I think they want more. And so, you know, they are putting up, I forget the exact amount of euro, in AI investment. They're putting it together with, you know, a public-private partnership to invest more. But I mean, you know, the EU writ large is also investing, I believe, like a hundred million euro or something to that tune in AI as well. So, you know, so much as America is kind of charting its own course, it is also kind of leading a pack in some way. And that is really pivoting toward AI innovation, tech innovation, you know, deprioritizing some of the safety aspects of AI. And the next summit is, you know, I think it's expected to be in India. I believe they're bidding for the next AI summit.
And India has said they want to be a leader on this, that, you know, they want to be an innovation hub as well. You know, they want to, you know, train and retain talent. And I would be really shocked if there's much emphasis on AI safety in the next, you know, in the next summit, especially since India's, you know, the Modi government has been, you know, very forthright on where they see themselves in this paradigm of AI development and, you know, the AI ecosystem from a policy perspective. So yeah, it's complicated. It really is. And, you know, what kind of looms over all of this, and I know I've been on a roll with this, so I'll end with this: what looms over all of this, of course, is the global competition between China and the U.S., right? You know, almost everything can be kind of viewed through that lens. It's, you know, the biggest relationship on the planet. You think about the billions that go back and forth between these two economies when it comes to trade. And, you know, whatever the U.S. is doing, China's keeping a close eye on; whatever China's doing, the U.S. is keeping a close eye on. And, you know, we saw what happened when DeepSeek was released, right, and the ripples that it had through, you know, the U.S. economy. And so, you know, all the while that we're thinking about this from, like, the U.S.-EU paradigm, we also have to think about it from what this looks like through, you know, the U.S.-China paradigm, and how these two countries kind of shape the debate, and where they're shaping the debate. Because, you know, China is playing in the Global South, and that is, you know, just historically a part of the world that we've not been super great with when it comes to investment in the way that the Chinese government has been. So, yeah, this could be a whole other podcast where we could break down these different priorities across, you know, different international factions. But I'll leave it at that, with the thought bubble around the, you know, U.S.-China relationship.
Sam Li: Yeah, and that's what happens when you talk about AI: increasingly, it touches on every other important issue in a way that I think few other topics do. Last question for you: what do you think makes the developments around AI in DC unique compared to other places? I guess, given our conversation, both other places in the US but also around the world?
John Hopkins-Gillespie: Yeah, so, you know, when we think about the DC ecosystem, right, this is where decisions are being made, right? You know, or a lack of decisions; not everything has to have an end, right? Like, Congress not doing something is just as impactful as Congress doing something. And at the end of the day, this is where the people who make decisions are. And so, you know, I think back to The West Wing, and, you know, it's talking about how decisions are made by the people who are in the room. And, you know, this is where the decisions are made. So you want to be in the room, right? And so it's really in everyone's interest in the tech ecosystem and the AI ecosystem, regardless of size, scale, purpose, you know, big tech company, tech nonprofit, you know, civil rights advocacy groups, right? Like, everyone has a vested interest to be in the room helping to shape decisions, or making the case for not doing anything at all. And so that's what I think makes DC unique. You know, we've got great educational institutions, you know, all around the DMV area. The center of political power is here. It's also the lawyer capital of the world. So, I mean, when you're thinking about how AI can be impacted and how development can be impacted, that starts with: do we have rules in place? Do we have agencies that are going to enforce existing rules? Are we going to get guidance on how these rules apply or don't apply? Are there going to be court cases being filed, and, you know, the D.C. Circuit, right? Like, these are all things that happen here. So, you know, there's a reason why all the big tech companies have outlets here. You know, there's a reason why the tech startup scene is blossoming in DC. You know, people understand the value of being here, with all the talent, with the access to, you know, political power that's here.
DC really is the scene. It's under-appreciated, I think, in many regards. I think we think of tech and we automatically go to Silicon Valley. But, you know, I mean, there's some really great tech hotspots out there. I mean, of course, New York is, you know, another, you know, really burgeoning hot tech scene. You know, when you really think about what could or could not happen with the future of the technology, I mean, that really sits here. Because if you have a major incident, lawmakers are going to react, right? They have constituents that may be impacted by these, you know, negative consequences. And so, you know, when bad things happen, lawmakers tend to sit up at attention and start doing something, which, again, that's a debate for another day, whether or not that should be the posture, but it tends to happen, right? And so, you know, if something bad happens, where there's a major AI incident, right, like, that will grab the attention of these folks. And if you're not here and you're not in the room and you're not making your case, then your voice isn't going to be heard.
Right. So that's, you know, for me, looking at why DC is important in the tech ecosystem, you know, that's what I'm thinking about. I'm thinking about, you know, the rules, the decision-makers, you know, the court cases. Like, that's here.
Sam Li: Right. Right. Well, John, that was great. Thank you again for your time.
John Hopkins-Gillespie: Thanks for having me.