Digital politics in the age of AI

Podcast
February 27, 2025



 

Description

In this episode, we’re exploring the ever-growing impact of artificial intelligence on digital politics and media. From shaping political campaigns to influencing public discourse, AI is transforming the way we engage with politics.

Karine Morin is joined by Fenwick McKelvey, Associate Professor in Information and Communication Technology Policy in the Department of Communication Studies at Concordia University, to break down the risks, rewards, and ethical challenges surrounding AI in the digital realm.

About the guest 

Fenwick McKelvey is an Associate Professor in Information and Communication Technology Policy in the Department of Communication Studies at Concordia University.

Professor McKelvey is also the co-director of the Applied AI Institute, which seeks to enable real-world applications of AI and to consider how AI can improve upon today's status quo.

He also manages Machine Agencies, an experiment between human and machine intelligence. 

Fenwick McKelvey is the author of Internet Daemons: Digital Communications Possessed, winner of the 2019 Gertrude J. Robinson Book Prize.

His research focuses on digital politics and policy, communications, and AI.

 

Fenwick McKelvey in the news

  • The federal government’s proposed AI legislation misses the mark on protecting Canadians – The Conversation
  • News coverage of artificial intelligence reflects business and government hype — not critical voices – The Conversation
  • Wait – Is ChatGPT Even Legal? – The Walrus
  • Freezing out: Legacy media's shaping of AI as a cold controversy – SAGE Journals
  • Political Bots: Disrupting Canada’s Democracy – CJC Policy Portal 

[00:00:06] Karine Morin: Welcome to the Big Thinking Podcast, where we explore today’s biggest topics with Canada’s leading voices. I’m Karine Morin, and I am the President and CEO of the Federation for the Humanities and Social Sciences.

[00:00:14] In this episode, we’re exploring the ever-growing impact of artificial intelligence on digital politics and media. From shaping political campaigns to influencing public discourse, AI is transforming the way we engage with politics.

[00:00:32] Joining me today to break down the risks, rewards, and ethical challenges surrounding AI in the digital realm, is Fenwick McKelvey, Associate Professor in Information and Communication Technology Policy in the Department of Communication Studies at Concordia University.

[00:00:54] Fenwick McKelvey, it's wonderful to be having this conversation with you today, your field of expertise is communication studies, which is quite broad, and you have taken more specifically an interest in artificial intelligence, can you share how that came about?

[00:01:10] Fenwick McKelvey: Certainly, Karine, and it's really wonderful to be here. So my background has traditionally been in what we talk about as communication: politics and policy. So I've been really interested in the rise of social media, and how that changed politics long, long ago, when we used to talk about blogs, and how the internet was going to change politics, and look where we're at now.

[00:01:31] But, one of the parts that always really struck me was an interest in the infrastructure and how the internet worked itself, what we talked about when we're thinking about the platform, what actually did that mean? What was it made of? And so I really became interested in trying to make sense of the technology that allows our communication systems to work and how those work.

[00:01:53] And that became a bigger and bigger theme for me, when I was looking especially at a lot of debates we had in the early 2000s around network neutrality, or whether the internet was going to prioritize certain forms of content over others. And that became kind of a real interest of, like, why was this happening? How was this possible?

[00:02:13] And in that, I started thinking really in depth about the internet itself, and how the internet was changing and becoming more intelligent. And that became the genesis of my first book, Internet Daemons, which is kind of a history, both of the rise of the internet as something we might call an artificial intelligence and really kind of tracing the link between how the internet works, and how certain forms of artificial intelligence developed, as well as the early efforts to make the internet more intelligent, embed more forms of control, forms of smart decision making, and these kind of everyday decisions.

[00:02:50] And that became really interesting to me as a policy problem because we saw the shift where instead of humans making decisions or judgments, we saw this turn towards algorithms, and we saw this turn towards now artificial intelligence. And that to me became really captivating because we are dealing with this society that demands a certain degree of scale, this ability of operating on orders of magnitude, and the only way to do that is in these automated systems.

[00:03:18] And so I became very interested in how artificial intelligence and algorithms are increasingly forms of policy tools and forms of governance that affect people's everyday lives. And so people complaining about, you know, why their content appears or not appears on social media. How their TikTok feed is affecting what they see, whether they're getting a bunch of junk content.

[00:03:37] All of a sudden, that really became this kind of driving interest, where we see across all media technologies, this demand and expectation to fix social problems with these technical solutions, which have moved from algorithms to artificial intelligence. And that's where I've gotten to in today, where I've been very interested in this kind of link between AI and media regulation.

[00:03:59] Karine Morin: Thanks for sharing a little bit of the trajectory of your interest. And so you also just mentioned there briefly governance, certainly policy, and I know that your work, therefore, has touched on government decision making and also politics.

[00:04:12] But let's start with government and what you have been seeing in terms of the use of AI by government, and some preoccupations you seem to have with how this is starting to take shape - and how perhaps we're not seeing as much as we should as to how this is taking shape. What are your thoughts on the use of AI by our government at this point in time?

[00:04:33] Fenwick McKelvey: Well, one thing I want to say is that a driving part of my research is being a Canadian communications policy scholar. Being a scholar is really trying to think about how, not only do we study these systems, but how we make change with them. That's been, I think, really central, and really a strong part of what excites me about the field and being a Canadian communication studies researcher.

[00:04:55] And that's been something that's also kind of carried on with a belief that what we're doing is trying to protect the public interest use of these technologies, and trying to think about how a new technology like artificial intelligence has detriments and benefits.

[00:05:08] And part of it to me is really seeing a gap between all the hype - and we're living through a moment where that hype may have real financial consequences, with DeepSeek - and maybe we've talked up or believed too much in the hype about AI, and trying to make sure that that hype doesn't have us rush to integrate certain tech solutions.

[00:05:30] And what we've been thinking about is one how to ensure that the government itself is being responsible in how it's integrating artificial intelligence into its systems. And so one part of this is looking at what are the ways the government's collecting, procuring, and evaluating AI systems, and where is it applying it?

[00:05:50] You know, is it doing it in high-risk or low-risk areas? There's just a new report from the Treasury Board that was really naming “no-go zones” - ways where we shouldn't be applying it - which I think is a really important part of this conversation. And the second is trying to make sense of how the public can participate.

[00:06:06] And what are the ways that AI, more broadly, really is a democratic challenge about how to take a very emergent technology and create the mechanisms that everyday people can both understand it and feel like they can participate in the policy process, and that's a second and a big part of my concern, is that our consultations and our way that we've been dealing with and developing AI policy has been fairly exclusionary.

[00:06:32] And some of the research we've done has really shown that the processes that I think are really legitimating this technology have moved ahead of the ways that we're allowing this technology to have public oversight, and ensuring that there's public interest values embedded in that technology.

[00:06:47] Karine Morin: So speaking of the public and how perhaps it has started to understand how AI is evolving: you've been particularly concerned by the role of legacy media. Can you explain your concern there, as to how it has been portraying AI, or how it has been learning about AI, and how therefore AI has been portrayed for the public to understand through that particular lens coming from legacy media?

[00:07:14] Fenwick McKelvey: Yeah, I've been really lucky to be part of a SSHRC-funded research area called Shaping AI, which is a four-country collaboration: Canada, Germany, France, and the United Kingdom. And what we were looking at in this four-country comparison was how does media, as one part of the social shaping of AI, affect our expectations, how we think, how the public understands AI.

[00:07:39] And really through that, we had this challenge because often, and this is a bit technical, but often what we look at in this type of work in science and technology studies is, is looking at a controversy or something that's controversial. And in that we face this challenge where AI has arrived relatively uncontroversially.

[00:07:58] People think that it's going to be an incredible boom for our economy, that it's coming about as something that's really for Canada's future; we're seeing a lot of investment in a digital 4.0 industrial strategy.

[00:08:13] And to me, it was really interesting. We faced this challenge where, if we're looking at, you know, big newspapers - the legacy media, or the media that's really had this very traditional role, which we could discuss in terms of where it fits in this kind of wider media landscape - looking at these spaces and feeling like, oh, we're not actually seeing a big debate about this, we're not seeing AI raising a lot of controversy.

[00:08:37] And very often - to give you a clear example - would be the Clearview AI controversy, which was this concern about police forces being able to use data scraped off the Internet to potentially identify photos and suspects - which has a lot of privacy implications - and that only became an issue because of the New York Times reporting. So we're kind of like, why is it that Canada's not breaking these stories, why aren't these stories becoming more a part of our conversation?

[00:09:06] And so, media - who's increasingly oriented around business desks and trying to break business news - and scientists - who are increasingly expected to function as entrepreneurs or promoters of the technology - work together to what we say kind of freeze out any of this controversy to cool it down and make it something acceptable.  

[00:09:25] And I think that that's a dynamic that we really want to call into question - not to criticize any of the individual players - but more just the consequences of this collaboration, where AI is something already primed and arrived at as beneficial, and the media is not set up, and not incentivized, in such a way as to call into question some of the downsides or risks of that technology.

[00:09:48] Karine Morin: I think you're quite right that there's a very celebratory tone to Canadians’ achievements in AI and some of the leading figures being Canadian and their recognition and all of that.

[00:10:00] I also want to turn to politics, and, starting with a piece that you had published back in 2019, you started looking at AI in shaping digital politics, including the rise of political bots. So, describe a little bit that concept of digital politics and what was the impact of bots appearing into that space?

[00:10:20] Fenwick McKelvey: Well, this is a fantastic collaboration I did with my partner at the University of Ottawa, Elizabeth Dubois, a close collaborator. And this bots project came up where you're thinking about: what are these types of programs that are running on social media - on what was then Twitter, on Facebook - that are posting content, that are interacting with people? Now we'd almost call them AI agents, but at the time, the term was a bot.

[00:10:49] And that was trying to make sense of how our public spaces include these non-human actors, and what are the consequences of that? And so, our project then was looking at how are bots embedded in part of social media platforms.  

[00:11:07] And we wanted to, in some ways, destigmatize concerns about bots as being just overly negative, and recognize that there's beneficial bots - bots that are helping journalists, as well as Wikipedia, manage content - as well as more problematic bots, which I think are more prevalent today, that are posting content, promoting content, being used to kind of manipulate how we think of public opinion on social media platforms.

[00:11:33] And the bot is something that is tricky to track. And certainly it raises these questions of what's authentic and inauthentic human behavior, which is really a kind of fraught question and not one that's always easy to answer, but it really does speak to the moment we're in now where you have a growing role of virtual influencers or artificial intelligence that are posting content, interacting with fans on Instagram.

[00:12:00] And so a lot of companies are investing in this idea that we no longer need human influencers, we need AI influencers, it's a whole industry now. The fact that many of the social media platforms are trying to create their own personalities, we just saw Meta launch that.

[00:12:14] The idea that you're going to be interacting with these non-human actors on your platforms. So the bot has really become this important part and this important feature of contemporary social media.

[00:12:25] And I kind of have mixed feelings about it, because at one point we didn't want to entirely, you know, problematize it - like “bots are only problematic” - but we didn't really deal so much with the downsides, the consequences of bots: ensuring that people know they're interacting with a bot, so making sure there's transparency.

[00:12:45] Or the fact that we have ways of better adjusting to how bots might manipulate what we think of as popular or not on social media. Which, you know, comes down to everything we have right now, I mean including Drake's complaints about Kendrick Lamar's recent Not Like Us.

[00:13:01] Like, it's a really important part of our media culture and we don't have good answers about what's popular or not on social media. And that's part, because we really haven't reconciled with the influence of bots online.

[00:13:11] Karine Morin: Hmm, and so that is playing into us making sense of anything that is happening, and certainly politically. I guess you're cautioning us that we're led down certain paths, let's say, that we should be a little skeptical about - as to whether they're authentic, honest, transparent - or whether we're really being pulled in a direction that, with a bit of caution, we would have been a little more skeptical of.

[00:13:39] So, to continue then in that space of what's going on politically and the use of AI: a book that you've just completed, I believe, SimPolitics, where you examine the implications of, for instance, running computer simulations to predict the outcomes of elections. What do you take away from the U.S. election, where Elon Musk became such a dominant figure in President Trump's campaign? Do you see some connection there?

[00:14:07] Fenwick McKelvey: Well, the project that I'm working on is a bit of a strange one, because I'm a Canadian writing a book about U.S. politics. And the motivation was trying to make sense of how so much of our discussion - and this is really post-Obama - is that we couldn't think of how political change happened without involving new technologies.

[00:14:29] Without saying, how is this campaign going to be defined by Twitter, or Facebook, or Instagram, or TikTok. And so, what I was trying to do is unpack, where did we find this link between political change and technological change? Where did that come from? And the project itself began initially trying to compare Canada and the United States.

[00:14:51] And really it became clear that a lot of the genesis for these ideas - which is something I want to work on a bit more - came from and really originated in the United States, and you can see how it imports, how it moves to Canada. And so the book itself then is really trying to track this way that we think about politics through technology, and how technology itself changes what's happening and how we think about politics.

[00:15:19] So I think when you have Trump and the current U.S. administration, I think a lot of the ways that we want to narrate what's happened has been: it's a result of, you know, what's happening on Meta or what's happening on TikTok.

[00:15:38] And really, I think that there's these deeper issues that the technology lens makes us miss, like wealth inequality, declining hope for the future. And so how do we actually try to decenter technology from being central to being a piece of how do we make sense of what's going on in politics?

[00:15:55] It's really trying to point to the ways that we think about politics that are limited or defined or shaped by this kind of attention and emphasis on technology itself.

[00:16:08] Karine Morin: I'm remembering the Obama era and the fact that there was this incredible collection of data and they could, as we understood it, really profile electors and sort of see where to spend their effort, who to be talking to, which doors to knock, and otherwise, I gather, abandoning others.  

[00:16:27] And sort of the decision making that seems to go along with that information and that use of computers and AI is really, coming back to your idea of some are excluded, some may not be sort of, reached in the same ways, getting the same information. And I guess that is a note of caution again for all of us.  

[00:16:47] So I want to come to a little bit of how, I guess perhaps with ChatGPT appearing on the scene, we now see and speak of AI in just about every sector, in every direction. And you've been talking about these risks that you see that AI poses and that you don't think we're talking about as much as we should.  

[00:17:09] And I guess I've just led us to this question: what should we focus more on in terms of this ubiquitous appearance of AI in every sphere of our daily lives or activities, professionally, personally? What concerns you and how should we be, looking further than the hype that we previously mentioned?

[00:17:29] Fenwick McKelvey: It's a weird time for me because, as someone who really was interested in artificial intelligence and how it was, say, filtering phone calls on our mobile networks - really these kind of nuanced ways of talking about AI governance - all of a sudden ChatGPT rolls into town, and you know, I knew something like that was coming.

[00:17:47] But by no means was I prepared for this spectacle - for lack of a better word - that was ChatGPT and especially being an educator working and training students all of a sudden, there's this real existential debate about how do I justify, and defend the intellect, the quality, the capabilities of my students when it seemed to be that, you know, anybody with an arts degree can be replaced by an artificial intelligence, which is like an incredibly bleak future to be all of a sudden normalized and thrust upon so many people.  

[00:18:22] And I think that part of this was really then trying to be conscious that what ChatGPT arrives with is this vision of what the future is going to look like. And that's been backed by Silicon Valley and small tech giants really working at the fringes of society in many ways.  

[00:18:41] And if you look at, say, a lot of the debates about AI doom or existential risk of humanity, you're talking about a very small subset in a very specific way of talking about, hey, what's going to happen with artificial intelligence?

[00:18:55] And to me, it became really clear that how we're thinking and constructing futures, and a future that involves AI, really isn't something that's much of a public conversation. And, ChatGPT is a great example of it, all of a sudden it arrives, and we're just expected to adapt to it.  

[00:19:11] Even though, quite honestly, and this is under Canadian privacy law, there's real debates about whether that technology is legal in Canada. We don't allow for that to be part of the conversation. And that to me was like, this really difficult moment, because I wanted to say that there's ways that our society has made collective decisions that are out of sync with potentially how ChatGPT works.

[00:19:33] And there's a way that it's being pitched that might be just hype, but certainly it's not something that we've had or have the capability of having these kind of discussions about how we're going to shape and direct this technology. Really, what we're being offered up is do you want to pay 20 dollars a month to use ChatGPT or not?

[00:19:49] And to me, that's where, getting back to my work as a policy scholar in the public interest, part of what I want to ensure is defending the capability of democratic societies to understand and govern these technologies, and to not legitimate a certain future of how we're going to use AI that's being cooked up in a small group of tech firms.

[00:20:11] And that became, like, the problem for me, and that's still what I'm working with, because everyone's talking about ChatGPT, and it's like, are you just talking about buying a ChatGPT service? And is this really a discussion about artificial intelligence, or have you just been bought and sold a prepackaged version of what AI's future is going to look like?

[00:20:28] Karine Morin: Is the genie out of the bottle? Can we be without ChatGPT? Or are there replacements that will come, in ways that you think would have been compliant with existing laws? So do you want to predict if there's a way back - or, as I say, is it out of the bottle, and likely there are some tools we're using that came about in illegal ways?

[00:20:53] Fenwick McKelvey: Well, I'm going to use my genius prognosticator hat, which is a hundred percent accurate - allow me to assure you, never have I ever made a mistake or gotten things wrong. The thing that I don't know is whether we're in a plateau or at a peak: whether we're going to see these technologies get better in terms of their capabilities, or whether the gains are going to be more marginal.

[00:21:15] And that's a really big question. I think there's a lot of debate about how much better these tools will get, versus are we going to see tools that are just getting more efficient - which is what happened, at the time we're talking, with China's DeepSeek, which really called into question the idea of having really big AI versus more kind of frugal AI, and the effects of that.

[00:21:36] So I think there's this big question for me about where and how much we're allowing ourselves to imagine where this technology is going to go. And so a big part of what the next step is: are we talking about, you know, a general artificial intelligence - which some people think we're on the verge of - or are we going to be moving towards more bespoke, super-functional ways of using artificial intelligence?

[00:22:00] So ways to write, I certainly hope, ways to help manage my schedule, so I'm not like stressing about like, there's certain ways that I'd love this to be beneficial, but I wonder whether it's going to be this “does everything” or “does a few things” really well type question.

[00:22:15] And, part of what I've really hoped for and really advocated for is that we think about how we can link the future of our public service media with artificial intelligence. There's lots of conversations going on in Europe right now about building a public utility or a public GPT, this is happening in the Netherlands and Germany, and I feel like we're in this kind of rut where we, most Canadians, and there's a lot of evidence, really believe in the future of the CBC.

[00:22:44] This is something that is an example of a public utility that has really benefited millions of Canadians, and yet we're incapable of saying, “Oh, well, something like, you know, technology like social media or AI could ever be something like a public interest or a public service tool like we have with the CBC.”

[00:23:04] And so I actually think maybe it's not stopping it, but it's creating public tools or publicly governed technologies - like a “CanGPT,” as I like to joke about calling it - that might do this type of thing.

[00:23:17] And I want to just keep that in the conversation. It's not that this is, I think, the perfect answer, but I feel like if we narrow it to just “do we use ChatGPT or not,” we forget that there's lots of different ways that the benefits of artificial intelligence, or the ways that large language models could be used in society, could be delivered and deployed very differently.

[00:23:37] And that could also help rejuvenate a lot of faith and hope in the future of something like what we think about as public service media. And those conversations are happening in Europe.  

[00:23:47] And I'd like to see those happening more, and I'm really lucky that they're happening. And some of the work we're doing in Montreal around the idea of an AI Commons, with [...] and Mutek and these great collaborators, also at Concordia, is about this idea of how do we think about AI, and delivering AI, in ways where there's really a public interest and a public value.

[00:24:07] Karine Morin: That seems incredibly valuable because otherwise, we really do sense that this is driven by corporations, that it is corporate interest at play and that, as you've alluded to before, we've really been driven by the sort of the business considerations, the business people, the businessmen, the Silicon Valley type.

[00:24:30] So just so much of this is considered through those corporate lenses, rather than public utility or public goods and the commons. So that's something maybe that I can ask you to speak further about, because you've alluded to a lot of collaborations, and one that we had noted to highlight is the work that you've been doing with the Machine Agencies initiative.

[00:24:50] And just more broadly, your work and your collaborators have come from the humanities and social sciences, those approaches - how that way of looking at new technology can be informative, beneficial, and how you bring that forward for us to consider in terms of what is going on with AI around us.

[00:25:10] Fenwick McKelvey: Yeah, one of the things I've been really thankful about in the changes in my career - being a professor sometimes is a weird thing because you endure, but you also have to reinvent yourself and think about new ways - and I've been really lucky through the Milieux Institute, as well as the Applied AI Institute at Concordia, to really be able to develop interdisciplinary collaborations, ways of doing things, and that's across engineering, technology, social sciences, humanities, arts.

[00:25:44] And the focal point for me has been my collaborations with the Machine Agencies Working Group. And Machine Agencies is a play on this idea that there's many types of agencies, many types of human and machine agencies, and what are the ways we interact.

[00:26:02] And arts and research-creation, which is a really key term, and I think really an amazing part of the work we do at Concordia, has been about how can we build things that help us make sense of these different ways that we relate with humans, and computers, and machines, and AI.

[00:26:18] And so we've done a number of different projects in Machine Agencies. We've done art exhibits with artists working with AI. We've created something we call the consultation machine, which is a way of foreshadowing a future where you no longer have to write your submission to a public consultation - an AI will write it for you.

[00:26:36] And then we know that we can improve that, or optimize that even further, by having those submissions read and summarized by an AI, so that no humans are involved in public consultation. We've also had other creators come in and work on certain forms of very simple robotic intelligences - this is a colleague, Zeph Thibodeau, who's near completion, so very, very excited.

[00:26:58] As well as making games. We've also created a board game called Lizards and Lies, led by Scott DeJong, which was trying to think about how do we describe flows of information, but using the format of a board game to do it. And to me, all the same, these kinds of collaborations are a fun way of trying to bring people together, trying to decenter myself, and trying to think about what is a space for people who are interested in what Beth Coleman and Maurice Jones would call “wilding AI.”

[00:27:34] I like to think, with my other collaborators in the Abundant Intelligences Network, about Indigenous epistemologies for AI: what are the ways that we can create spaces and conversations about the many different ways that we might think about something like an artificial intelligence, and what are the formats we can experiment with - the games, the workshops, the art pieces - that might help us with these different ways of making sense of this thing that we just call artificial intelligence.

[00:28:03] Karine Morin: So it seems like a lot is happening at Concordia because there's also the Applied AI Institute, you just alluded to it. And there seems to be a focus on some of the pressing challenges that we're facing, like climate change, sustainable cities, and looking at how AI may be something that we use beneficially.

[00:28:25] So, can AI be utilized in a way that is good for us? Just now you were alluding to fun and creative, but can it also be a force for good?

[00:28:36] Fenwick McKelvey: Yeah, this is where I feel like I run the gamut, and people make fun of me, because I can be the policy wonk - I just love talking about AI governance and that world - and that becomes incredibly boring when your other option is creating a board game. Why would you want to hang out in the policy world?

[00:28:52] And I think that very much the fun here bleeds into the challenge - they're not separate, they're part of a spectrum. And the part where I've been really lucky: for the past three years, I've been co-director of Concordia's new Applied AI Institute.

[00:29:10] And part of what we've really tried to think about at the Institute is, one, how do we advance and think about building AI responsibly, and what does responsible AI look like? So what does that mean? How do we include that in some of the work we do in terms of building AI systems? And then also, how do we kind of emphasize and find ways to include community in artificial intelligence.

[00:29:31] And so this is some of the work we've tried to think about: can we get into community-based research, or research with the Montreal community, involving both the impacts and the ways we could build AI. And I would love to think about how this becomes an opportunity for the Institute.

[00:29:46] Because, you know, we've grown, but there's so much opportunity to think about experiments and connecting everyday Montrealers, everyday Canadians with the benefits, and having them feel like they're part of that process.  

[00:30:00] And so that's what excites me about the Institute: we are this kind of space that's building AI, and trying to think about how do we build it better, and build it in ways that demonstrate that Concordia is bringing a new and novel approach to these discussions about artificial intelligence.

[00:30:17] Karine Morin: Well, thank you very much for showcasing some of that collaborative, interdisciplinary work that is going on. And I think what we've just been able to touch upon is the complexity of AI - and yes, we understand it to be technologically quite a feat, but through the humanities and social sciences, we'll bring careful perspectives, we'll scrutinize, we'll investigate, we'll question, and hopefully people who've listened to this conversation will be a little bit more cautious in their consideration of the benefits, but also will recognize the potential risks of AI.

[00:30:53] Fenwick McKelvey, thank you so much. It has been a pleasure to have this conversation with you.

[00:30:59] Fenwick McKelvey: Karine, it's been a total pleasure. Thanks everyone for listening.

[00:31:07] Karine Morin: Thank you for listening to the Big Thinking Podcast. Also, a very sincere thank you to my guest, Fenwick McKelvey, Associate Professor at Concordia University. I also want to thank our friends and partners at the Social Sciences and Humanities Research Council, whose support helps make this podcast possible.

[00:31:29] Finally, thank you to CitedMedia for their support in producing the Big Thinking Podcast. Join us for next month’s episode, and follow us on your favorite podcast platform to catch it as soon as it’s released. À la prochaine! 

 

Follow us for more episodes! 

Spotify

Apple Podcast

Amazon Music

Podcast Addict

iHeartRadio

Podfriend