Sustainable AI

Event
June 13, 2024

Big Thinking at Congress 2024

As artificial intelligence grows in scale and prominence, how can we ensure a more responsible and equitable usage of this technology? Our esteemed panel will tackle this subject and more as they discuss AI law and regulation, the ethics of AI and AI algorithms, and the impact of AI on human rights, equity, and social justice from a global perspective.


Headshot of Céline Castets-Renard

 Céline Castets-Renard

University of Ottawa

Headshot of Jocelyn Maclure

 Jocelyn Maclure

McGill University

[00:00:18] Annie Pilote: Welcome, bienvenue. I am Annie Pilote and I am Dean of the Faculty of Graduate and Postdoctoral Studies at Université Laval, and I am also the Chair of the Board of Directors of the Federation for the Humanities and Social Sciences.

[00:00:35] On behalf of the Federation for the Humanities and Social Sciences and McGill University, I am honored to welcome you to the first Big Thinking lecture of the 93rd Congress of the Humanities and Social Sciences: “Sustainable AI.”

[00:00:48] Today, Jocelyn Maclure and Céline Castets-Renard will draw on their work on the ethics, regulation, and impacts of AI to consider how we can ensure a more responsible and equitable usage of this technology. They will be joined in discussion by moderator Anna Vartanyan.

[00:01:09] Today's event will take place in English and in French, as well as American Sign Language (ASL) and Québec Sign Language (LSQ). We will also be providing simultaneous interpretation and closed captioning in English and in French.  

[00:01:30] Interpreters and closed captioning will appear on the screen on stage and on the Zoom screen for those of you joining virtually. For those joining us in person, to access simultaneous interpretation, please scan the QR code provided in the room, select your preferred language, and press Listen using your own earpiece. 

[00:01:52] For those joining us virtually, you can click on the closed captioning button to enable captions. To use simultaneous interpretation, click on the interpretation button and select the language you would like to listen to.  

[00:02:16] If you are joining us virtually, we recognize that the following land acknowledgment might not be for the territory you are currently on. We ask that if this is the case, you take the responsibility to acknowledge the traditional territory you are on and the current treaty elders.  

[00:02:34] We begin by acknowledging that McGill University, where we are gathered today, is on land which has long served as a site of meeting and exchange between Indigenous peoples, including the Haudenosaunee and Anishinaabeg nations.

[00:02:51] The Haudenosaunee and Anishinaabeg peoples have long ties to what is now the Island of Montréal. Kawenote Teiontiakon is a documented Kanien’kéha name for the Island of Montréal. The city of Montréal is known as Tiotià:ke in Kanien’kéha, and Mooniyang in Anishinaabemowin. McGill University is located closest to the Kanien’kehá:ka Nation communities of Kahnawá:ke, Kanehsatà:ke, and Akwesasne.

[00:03:22] We acknowledge and thank Indigenous peoples whose presence marks this land on which we gather today, and for their valuable contributions past and present. 

[00:03:36] The Big Thinking lecture series at Congress brings together academics and public figures to tackle some of the most pressing issues of our time. For Congress 2024, the series amplifies the theme of Sustaining shared futures with conversations that look at what is still possible to achieve together – and what needs to be done – in the face of this vast and complex imperative to produce solutions for current generations and to sustain future ones.

[00:04:07] You can participate in the conversation on social media using the hashtag #Congressh – Congress with an ‘h’ at the end.

[00:04:14] On behalf of the Federation and McGill University, I thank the series’ leading sponsors – the Canada Foundation for Innovation and Universities Canada. Thank you all so much for joining us today. Please welcome Dr. Roseann O’Reilly Runte, President and CEO of the Canada Foundation for Innovation, who will introduce today’s conversation.

[00:04:39] Dr. Roseann O’Reilly Runte: Good evening everyone, and thank you, Ms. Pilote – that was a great introduction to Congress and to the work you have done with researchers across the country. It is my pleasure now to present today’s panelists.

[00:05:03] But first, I will tell you what exactly CFI is. I think most of you know what CFI is, but in case you don’t: we have the mandate to provide infrastructure that equips researchers in every discipline with the tools and facilities they need to pursue ambitious ideas, to respond to emerging and sometimes urgent social and economic needs, to seize opportunities, and to create meaningful insights and solutions for Canadian society.

[00:05:40] We've always supported research in the social sciences and the humanities, but this year we have made a special effort to ensure that the humanities scholars and social scientists of Canada feel welcome to apply for the infrastructure they need in an increasingly digital world.

[00:06:00] Congress provides a forum to discuss big ideas and to help advance our understanding of the evolving and complex issues that shape the culturally rich and diverse world in which we live. So, congratulations to McGill University for hosting the event, and to the Federation for the Humanities and Social Sciences for organizing the Congress – and I really think they went way beyond what was necessary.

[00:06:31] Because this morning, if you opened The Globe and Mail, you saw an article by Geoffrey Hinton on the topic of artificial intelligence and its impact on society. So, what a great way to set up this talk. Thank you very much for that.

[00:06:41] Today's Big Thinking lecture – “Sustainable artificial intelligence” – perfectly reflects this year’s Congress theme of Sustaining shared futures. As we continue to explore ways to take on multifaceted challenges through concerted action across disciplines, focusing on unequal impacts and on solutions, we know that artificial intelligence has grown in scale and prominence, and it is now our responsibility to ensure that the technology is used in an equitable and fair way.

[00:07:32] We are joined by two distinguished scholars for today's session. Jocelyn Maclure, a member of the Royal Society of Canada and Full Professor of Philosophy, has been appointed Jarislowsky Chair in Human Nature and Technology. First known for his work in moral, political, and legal philosophy, he now focuses mainly on the speculative and practical philosophical questions raised by advances in artificial intelligence.

[00:08:11] His most recent articles have appeared in journals such as Minds and Machines, AI & Ethics, and AI & Society. In 2023, he was a Mercator Visiting Professor for AI in the Human Context at the University of Bonn, in Germany. Outside academia, he also chaired the Ethics in Science and Technology Commission, an advisory body of the Québec government, and served on the scientific committee of the Montreal Declaration for the Responsible Development of AI – a declaration that has had a huge impact throughout the country.

[00:09:03] When I travelled through the country and asked our researchers, “What do you think about artificial intelligence?”, the answer was always that they were proud that Canada made such a declaration – the Montreal Declaration – so we are known as a country that respects ethics in the development of artificial intelligence.

[00:09:30] Céline Castets-Renard holds the University Research Chair on Accountable Artificial Intelligence in a Global Context at the University of Ottawa. She also holds a research chair in Law, Accountability and Social Trust in AI at ANITI – the Artificial and Natural Intelligence Toulouse Institute – in Toulouse, France.

[00:09:59] She is an expert member of the European Commission’s Observatory on the Online Platform Economy. She is a former junior member of the Institut Universitaire de France and a former fellow of the Information Society Project at Yale Law School.

[00:10:17] Her research focuses generally on the law and regulation of digital technologies and artificial intelligence in comparative perspective – especially the protection of personal data and privacy, e-commerce, ethical issues related to the regulation of autonomous vehicles, policing technologies, online platforms, and cybersecurity. That's quite a lot. She also studies the impact of technologies on human rights, equity, and social justice in a global perspective, particularly in North–South relationships.

[00:10:59] The panel will be moderated by Anna Vartanyan. She is the Director of Artificial Intelligence and Sustainable Development at Mila, Québec’s artificial intelligence institute. She spent more than 15 years – and I think she must have started before she graduated from elementary school – working for the United Nations, supporting developing countries in transitioning to green technologies.

[00:11:29] She joined Mila to help develop its sustainability strategy, focusing particularly on the use of artificial intelligence for climate change. Ladies and gentlemen, please join me in welcoming the panel.

[00:12:03] Anna Vartanyan: Thank you very much, Roseann. We have a very interesting topic to discuss today: “Sustainable AI.” We're going to try to approach sustainability from a larger, societal perspective. When it comes to AI, its vast deployment over the last few years has created huge excitement around it – many of us, of course, have used ChatGPT or have altered images with AI.

[00:12:32] But with that excitement and that novelty came a lot of societal questions, from the more global and existential, such as “Will AI take over humanity?”, to the more concrete and specific: “Can AI help societal processes?” or “Can it create challenges for our institutions?” And, in terms of privacy and human rights, “What are the implications of AI?”

[00:12:57] And on a more positive side – in my own work, we work a lot on climate change – what are the applications of AI that can be useful for society? So there are a lot of questions, and I hope we can go through some of them today. With this, I'd like to give the floor to Jocelyn.

[00:13:13] Jocelyn Maclure: Alright, thank you so much Anna. Hi, everyone. Thank you to the Federation for the invitation. So, we will have a bilingual discussion today. I will make a few remarks in English and Céline will follow in French. 

[00:13:30] All right, so, as a philosopher, I want to submit to you a few distinctions, just to try to figure out what is going on with AI today and what we should think about it. I've been working on AI for about seven or eight years now, and I'm not a computer scientist at all, so I had a lot of work to do to understand progress in AI – why, after a couple of decades when the research agenda was stagnating, did it take off again?

[00:14:09] I had to do considerable work to figure it out, and sometimes I try to take a step back and imagine that I had never turned my attention to AI and, as a concerned citizen, just listened to what is being said about AI in the public sphere. I would be totally lost, because what is being said is quite dazzling, and people are defending very contrasting views on AI.

[00:14:42] OK, so the first distinction: if we want to think about the impacts of AI and how it will transform human life, we should distinguish between actually existing AI systems, based on different machine learning techniques and algorithms – the AI systems that we use in many different sectors of human life, the ones at the root of, say, search engines, social media, and other digital platforms.

[00:15:18] These are the AI systems that we can now use to make increasingly high-stakes decisions in the judicial system, in police forces, or in human resources, and so on. These are the AI systems behind what we now call generative AI – ChatGPT and other large language models that can produce answers in different natural languages such as English and French, or that can produce content such as images, videos, and so on.

[00:15:56] So these are the AI systems that are transforming human life now, in pretty much all its spheres. But when people talk about AI and its impact on human life, they often refer to what I call “possible AI systems or technologies” – AI that we have never seen, that has not been conceived of as of yet, but that is logically possible. We could move toward it if progress is steady, but there is no guarantee that we will ever get there.

[00:16:31] This is what is referred to as AGI – artificial general intelligence, human-level intelligence – and sometimes we talk about artificial superintelligence: AI systems that would be cognitively superior even to humans.

[00:16:51] We often talk about conscious or sentient AI. As far as we can tell, the algorithms that we use now, the ones we refer to as AI systems, don't feel anything: they don't have subjective experience, they don't have emotions, they cannot suffer, and so on.

[00:17:10] But many serious researchers – and there was a reference to the interview given by Geoffrey Hinton in The Globe and Mail in the past few days, and Hinton, a deep learning pioneer, is one of the godfathers of the kind of AI that we now use – think otherwise. Hinton thinks it is probably a matter of years before we reach artificial general intelligence, and he thinks that these AI systems – if they are not already – will actually be conscious or sentient. So that's from someone who has been instrumental in recent progress in AI.

[00:17:50] One thing we need to keep in mind is that we are not there yet, and there is no guarantee that we will ever reach that kind of state, because in all the very sophisticated cognitive agents that we know – humans and nonhuman animals – intelligence and consciousness are based on biology.

[00:18:16] These are properties that emerge in living organisms that strive to survive and adapt to their environment, to their world, and so on. So it could be that the kind of multidimensional intelligence that we have – limited, yes, but the kind with which we built this civilization – requires something like a body that connects us to the world, the capacity to feel pain and pleasure, the capacity to suffer, the will to survive, and so on.

[00:18:59] So perhaps all these features of human and animal life are necessary conditions for higher forms of intelligence like ours. So when we ask whether AI is creating something like an existential risk for humankind, that is predicated on the idea that we will reach artificial general intelligence, and perhaps conscious AI, at some point in the near future – but there is absolutely no guarantee that that is even doable, right? We cannot exclude the possibility that we will get there, but there is absolutely no guarantee that machines can reach that state.

[00:19:40] So that is something that we need to keep in mind. OK, I'm not sure how many minutes I still have – two, ok. As you can tell, I have something like what I call a deflationary view about these very strong claims made about the future of AI.  

[00:20:00] But what I want to add is that, once we have said that, we do have very powerful AI systems as we speak, and large language models are very, very impressive – we can query them about complex topics and they will give you sophisticated answers. So actually existing, machine-learning-based AI systems do create very important ethical risks, and we do need to take these risks very seriously.

[00:20:32] So I don't want to downplay the risks of actually existing AI systems. In my work as an AI ethicist, I think about AI's explainability problem – the fact that these are black boxes: they come up with outputs, but we don't know why they came up with that particular output. That creates major problems when we use them to decide who is going to get a loan, who is eligible for bail, or who will be hired for a particular job, for instance.

[00:21:02] We cannot provide explanations for the outputs. They can be used in ways that lead to different forms of discrimination against members of different groups – that's something I'm sure we will talk about today. We rely increasingly on generative AI, and we know that while generative AI can very often produce accurate answers, sometimes it makes stuff up.

[00:21:29] These models confabulate: they create answers that are very plausible and very well expressed at the level of syntax, but semantically they are sometimes off, they are inaccurate. And these tools can be used to generate disinformation massively, at a very large scale.

[00:21:53] So we could go over the whole list of ethical issues that these systems raise. And we can use them in striking ways – you know, some people are training generative AI to create digital avatars of a loved one who died recently.

[00:22:11] It's used in all spheres of our lives, from the most political ones to the most intimate ones, and we don't know yet how that will change us. That's why I think we need to take AI very seriously, and Céline will talk about legal regulation that is required. Be that as it may, when we talk about possible very strong forms of AI, there is no guarantee we will ever be there, so let's focus on the kind of risk that the AI that we know actually creates. Thank you.  

[00:22:47] Anna Vartanyan: Thank you so much Jocelyn. Céline, the floor is yours. 

[00:22:51] Céline Castets-Renard: Thank you. I also would like to thank the organizers for inviting me. It's a true honor to be here with you, and I'm delighted to be in discussion with Jocelyn – it's a real pleasure.

[00:23:02] And I'll easily bounce off what Jocelyn said, because we're in perfect agreement: let's deal with the current risks of AI and see if other risks come along. It's true that we already have a lot to consider because, in my opinion, the most important issue for legislators, regulators, and thinkers in general at the moment is to try to sort out the benefits of AI from the risks.

[00:23:31] And of course we must try not to be so restrictive that we prevent the benefits we can derive from it – the health sector in particular is often mentioned, but we also know that the risks are very high in the health sector. So we need to take a case-by-case look at the types of AI we implement, the types of AI systems, the uses we put them to and, above all, the way we design these AI systems.

[00:23:58] And from a legal point of view, what we can say right away is that these AI systems are disrupting existing law, and are also forcing us to think in terms of AI-specific law. 

[00:24:12] As for the first point, on existing law: when we think of language models, for example, these are models built on a large amount of data, often harvested from the Internet or various sources, and this data is often protected – protected as personal data, for example, or protected by copyright or trade secrets.

[00:24:38] So we can already see that we're undermining existing legislation. We can also think of problems of competition, because these AI players are obviously the same players as in the digital sector: we find language models like Llama developed by Meta, as well as those of Google, Microsoft, and so on – the same players who already dominate the digital world.

[00:25:08] The question of developing alternative AIs, and perhaps different models, is a difficult one, because it requires a huge amount of resources, a huge amount of data, and a huge amount of means. This also ties in with the environmental issue and the question of the frugality of these models: we may need to think about an AI that isn't so resource-hungry and that does things differently.

[00:25:45] We know that smaller language models can be developed, so we also need to take environmental issues into account, and perhaps ask our players to make a little more effort in this area. 

[00:26:01] So over and above existing rights – privacy and personal data, of course – we can think of the risks of a surveillance society, because AI tools such as facial recognition, if deployed on a large scale in public spaces, would obviously infringe on freedoms such as the freedom to demonstrate, the freedom to come and go, and so on.

[00:26:24] So there are a lot of human rights and fundamental rights issues at stake and, given the scale and characteristics of AI today, we think it's time to adopt legislation, as Europe has done – Europe has just adopted a major regulation, a major text – and as Canada is currently considering through Bill C-27 at the federal level; Quebec has also held public consultations on the issue, notably on a framework for AI.

[00:26:56] So why do we need a special law? Why can't we make do with the law we already have? I mentioned privacy, personal data, principles of equality, non-discrimination - these are things we already know about, of course, and which are already protected. 

[00:27:10] But it seems rather difficult to make do with existing law, given the characteristics of AI – in particular the opacity mentioned by Jocelyn. If we don't include in the law the means of explanation, of transparency, of human control over AI systems, then in the end we're going to have an opaque AI, an AI that may not be sufficiently controlled by humans.

[00:27:36] And so, if we don't have a minimum of requirements for how AI should be built, it's likely to be to the detriment of existing laws and, above all, to the detriment of human interests.

[00:27:48] So we need to lay down clear rules on how to make AI socially acceptable. We probably also need to go further and ask ourselves whether all AI systems, all AI uses, are good enough; perhaps we need to exclude some of the possibilities and decide not to deploy certain AI systems, or at any rate certain uses, certain purposes. This is the whole question of the risks that AI can produce, some of which may be considered socially unacceptable – and in those cases we would decide not to deploy these AI systems.

[00:28:27] This is what the European Union is doing by designating eight prohibited AI practices – I'm not going to give you the list, I'll spare you, but I'll give you an example. For example, it bans Chinese-style social scoring, which is a bit of a bogeyman, somewhat exaggerated, but it's certainly the model we don't want in Europe.

[00:28:46] We also prohibit large-scale use of facial recognition in public spaces by law enforcement agencies - although there are a number of exceptions. 

[00:28:56] We also ban systems such as Clearview AI – the American company that collects lots of faces on the Internet to train an AI system and then sells facial recognition AI systems to the police, for example.

[00:29:17] In Canada, in the Bill C-27 that I mentioned, there are no prohibitions, unfortunately, but I think that is something that should be important, because it's also a way for a society to set its limits and recall its values. I think Canada could have things to say just as much as Europe, and probably things that are more specific in terms of culture – cultural diversity, for example. I think Canada could want to prohibit certain uses or certain purposes.

[00:29:45] And beyond this example of unacceptable risks, what we can learn from this legislation is precisely that we are adopting a risk-based approach, with a gradation of risks. So we are obviously going to require compliance with certain obligations for high-risk AI systems, as they are called in Europe – or high-impact systems, as we say in Canada.

[00:30:10] In addition to these types of risk-based legislation, there are also discussions in other countries, notably the United States and the United Kingdom, which are placing a strong emphasis on AI safety, and ten or so countries today want to create an AI safety institute, including Canada, the United Kingdom, the United States and France. 

[00:30:42] Here, the idea is perhaps more to protect national interests, national security, national sovereignty, because we're afraid of cyberattacks in particular, of all kinds of attacks via AI, and that's also a major issue today. So the two approaches can go hand in hand, but in any case, that's more or less what we're seeing at the national level, if we compare different states.

[00:31:13] And then – and I'll end here – at the international level, too, classic international organizations such as the UN are starting to take back a little of the leadership that had been somewhat neglected until now. The OECD is also very active, as is the Council of Europe, which has adopted an international AI treaty open for signature by all member states of the Council of Europe, but also by others wishing to join this text.

[00:31:40] UNESCO has long been working on ethical principles that incorporate a cultural dimension. So, to sum up, we can say that today we're aware of what's at stake, and we think that the rules that already exist – whether ethical or legal – aren't enough, so we need to provide a specific framework for AI at the national and international levels.

[00:32:05] Anna Vartanyan: I have so many questions. Let's start with you, Céline: you mentioned differences in regulation between North America and Europe. I'd like to touch on the subject of the impact of AI on global North–South inequalities – could you talk a bit more about your work in that area and share your knowledge with us? Thank you.

[00:32:27] Céline Castets-Renard: Thanks for the question; there are many points of view to be had on North–South relations. What we can say straight away is that the “Global South” – I don't really like this expression because the realities are so different, but let's use this term – the states that make up this Global South are well aware of the stakes of AI and don't want to let themselves be left behind and, above all, don't want to see Northern standards imposed on them. So there's already an awakening in international institutions, which is fortunate.

[00:33:00] And it's true that there's a twofold risk of colonization, let's put it that way – by technology and by international standards. So it's important for states to express the type of framework they'd like to have but, even more so, it's important for civil society, for individuals, to take up these issues and to be able to say what problems they're facing and whether AI is useful in responding to those problems. I think this should always be the first question for everyone, not just for the South.

[00:33:38] And maybe the problems are specific, and maybe the solutions in the North don't match the solutions in the South. Beyond that, we also know that the data used to train AI systems – for the system to be effective and for AI to be of any help – must correspond to the population to which it will be applied. So there's also a question of situated data, data localization, situated AI – to borrow Sandra Harding's expression, “situated knowledge.” The idea here is to try to bring something specific to the table, and so obviously the countries of the South need to be involved in the solutions that will be applied to them.

[00:34:24] I'm still worried, though – that was the nice speech – because we can also see that the South is often used, unfortunately, for tests, for testing technologies, and also for the possibility of capturing personal data very easily.

[00:34:44] In Senegal, for example, Microsoft is offering tools and teaching aids to schools free of charge in exchange for the collection of student data.

[00:34:58] We also know that all states are very fond of surveillance tools, in reality, and the countries of the South are no exception. So from the point of view of democracy, and given the weight of certain governments in the South, we also know that partnerships have been signed with China but also with the United States.

[00:35:25] We're also testing solutions in less regulated areas, in places where we'll be less closely watched, in places where there may be personal data protection authorities but they're not very strong. And so the things we can't do at home, we're going to test elsewhere – and that's regrettable, more than regrettable.

[00:35:48] Anna Vartanyan: Thank you so much. A little follow-up on your response to the question – this is to both of you, actually. When the Sustainable Development Goals were developed, AI was not really in the picture that much; the technology existed, but it was not as high a concern as it is now.

[00:36:11] How do you think it can be incorporated now, as the G7 did in 2023? Can we really wait until 2030 to start bringing this into the discussion and, if not, how can we start incorporating it already – both from the perspective of inequality and from the perspective of access to digitalization and technology, which is another topic? What is your opinion on that? How do you incorporate that?

[00:36:35] Jocelyn Maclure: Thank you for the question – it's not an easy one. Fortunately, there are an increasing number of researchers working on AI for reaching our sustainable development goals, but also on the environmental impacts of current AI technologies.

[00:36:57] As with other societal issues related to AI, I think that we should start from a position of open-mindedness and look at the actual uses, effects, and impacts of AI technologies. In that space, many are saying, yes, we can surely use AI to reach our targets.

[00:37:25] One thing that machine learning is good at is coming up with predictive algorithms – algorithms that could be used to predict when a particular building will need more energy, and when we can reduce the energy consumed in buildings such as this one, so that we optimize how we consume energy. That's very promising, and I hope that researchers are hard at work coming up with actual algorithms that we can use to consume less energy.

[00:38:06] But right now, how many of these systems are actually being used and deployed? We hear about them, but they are not deployed at a large scale. I hope that we will get there. And AI can be used in other ways too – I think you alluded to it – we can use it to predict where we should plant more trees, or where forest fires are more likely, or we can use machine learning algorithms to track and monitor endangered species and come up with better policies to protect them.

[00:38:51] These are all cool applications, but it looks like the scale is not at all where it should be with regard to the needs that we have. On the other hand, being open-minded but also realistic, we need to acknowledge that we now need a tremendous amount of energy just to train one large language model, or any kind of deep neural network. We need lots of energy to train them and to run the data centers – even in hydroelectricity-rich Québec, we cannot really have new data centers now. We don't have the capacity.

[00:39:27] So that is one thing: can the AI sector offset its energy consumption? That would already be huge, but we're not there yet. The other aspect is that it is not a fully dematerialized sector of the economy: you need all these powerful chips, you need very powerful supercomputers, you need the hardware, and very often the natural resources are extracted in the Global South by workers who are being exploited, and so on.

[00:40:01] So right now, I don't think the picture is very positive. I hope it will become more positive as we move along, but I think that, again, the priority is how to reduce that cost and these environmental impacts – and I don't think we are there yet.

[00:40:24] Céline Castets-Renard: I agree with what Jocelyn has just said. I really have the impression that we have environmental policies and AI policies, and that from time to time we raise the issue of resources, especially for geopolitical reasons – because we need to extract minerals elsewhere, or because we need microprocessors made in other countries – which generates a geopolitical vision that's a little more global and therefore environmental in nature.

[00:40:57] But I don't get the impression that we're making enough of an effort to link up the issues and perhaps make more demands on the most powerful players. Because in the end, if we wait too long to impose these obligations – obligations to minimize the use of resources, for example – the dominant players will already be rich enough and well enough established that it won't be very difficult for them to comply with the new rules, as we systematically see.

[00:41:37] Meanwhile, new and smaller players risk disappearing. Always waiting to regulate, on the theory that this encourages the first movers, always ends up rewarding the leaders. So we mustn't lose sight of the fact that if we wait too long and don't lay down requirements right away, we'll find it hard to have a diversity of players, because it will be too expensive to comply – whereas if the market is organized from the outset with environmental requirements, it will be easier to adapt, because they will be integrated from the start.

[00:42:13] And I'd like to add that these public policies – environment and AI, that is – also include financial and tax incentives, as well as research incentives. So perhaps more resources could be devoted to encouraging models that consume less, to encouraging reflection on AI systems that consume less, rather than trying to be all things to all people with these general-purpose models which, by definition, can be used for anything. The power that implies still raises the question of other choices, such as having AI that's a little more specific – more like fine craftsmanship.

[00:43:01] And it's true that this American model – because the field is very much dominated by American companies – is the model that has taken over the market today. But maybe we should, while there's still time, try to think about other types of AI, fund that research, and fiscally encourage start-ups or research teams that are trying to think differently.

[00:43:30] Anna Vartanyan: Thank you both so much. You touched on so many important points, very close to my heart. I'll close on that and give time to the audience, but one quick remark: it's interesting that you spoke about the energy and carbon intensity of certain applications. For our audience: most of the carbon-intensive applications are related to large language models such as ChatGPT, but the positive applications of AI, such as those for climate change, are mostly not that carbon-intensive. So it's very interesting that you spoke about this, and I couldn't resist jumping on that.

[00:44:04] And a great example: we have a Montreal-based company – Whale Seeker – that is doing incredible work to protect the biodiversity of marine mammals. It's an amazing company, the pride of Montreal, and I couldn't resist mentioning it. I will give the floor to the audience now, and I'll have a few more questions if we have time at the end, but let's hear from the audience first. [...] No one wants to break the ice? There you go.

[00:45:07] Audience member: [...] I feel that much of society or civilization is built around jobs, and I feel even the current advancements in AI will take away many of those jobs. I know that the more we get equipped with AI, the more opportunities it presents. For example, my father still hasn't learned how to use a mobile phone; I still haven't learned how to use Instagram. So I think there is a learning curve that is not going to change for humans. My question is more about what initiatives are happening, what the space is, so that -- these advancements are most sustainable?

[00:46:25] Jocelyn Maclure: Thank you for your question. I had to find out about Instagram in the past few years, basically because of my kids, and I can assure you that you're not missing out on anything valuable or important, as far as I can tell – quite the opposite. But more seriously, it's a very good question, and how you answer it – especially about the impacts of AI on work, on the place of work in our lives, on the availability of good, well-paid jobs – truly depends on your attitude towards the future development of AI.

[00:47:07] Again, to refer back to Geoffrey Hinton: in that same interview, he said that we will all soon be out of work, basically – not only those doing manual, physical labor, but also those doing cognitive labor. I disagree with him: when you look at the limitations of the best AI models now, it tells us that we are not about to be replaced. Of course there can be a few exceptions – some of us are more vulnerable here and there – but overall, I think the most likely impact is that in many areas AI will be integrated into the job market, into the work environment, and many of us will have to learn how to work with AI.

[00:47:58] But because there are things that AI does well and things that it doesn't do well, in many cases it can augment our capacities. We need to keep control, maintain control, and human expertise and judgment are still required – to be able to say that an answer is just a confabulation made by a large language model, or that an outcome actually comes from a bias against a group, or to have the capacity to explain and justify our decisions, and so on.

[00:48:43] So in many different ways, human intelligence is still required. I think it's more about how we learn to work with AI so that it augments our capacities while we remain aware of its limitations. Just to conclude: if we rely blindly on large language models, we will often be deceived, because the answers are always very plausible. They are so well structured that if we don't know a subject area, we will not know that an answer is actually false, inaccurate, or not specific enough. I keep saying we need to keep cultivating human expertise and judgment, because it is simply not true that the best AI can replace human intelligence as we speak.

[00:49:38] Céline Castets-Renard: I'd just like to add that, once again, I agree with Jocelyn: to have true human control, you really need to retain expertise. A whole field of research and experience is developing around human–machine interaction, and I think that today this is exactly what we need to be thinking about: what does the human do best, and what does the machine do best?

[00:50:04] Beyond that, I also think we have to be careful not to delegate the interesting tasks. A professor from Columbia, Tim Wu, said a while ago about the law, about the legal professions, that more and more tools are integrating AI because these are very document-based professions – so it's clear that AI is very useful for doing research, sorting files, looking for evidence, and so on. But, for example, it would be completely wrong to delegate legal strategy to a machine, because that's something that stimulates us intellectually.

[00:50:42] In the end, it's easy enough to ask a language model, “What should I do in this case?” So I think we have to continue not only to be trained, to keep a critical eye and maintain our expertise, but also to maintain the interest of our work – because if we're just there to control what the AI does, or to label data or label information, it's really rather sad to be human in that context. So I think we need to keep the interesting part of our jobs.

[00:51:24] Anna Vartanyan: Thank you both. I think there’s another question. 

[00:51:31] Audience member: Dr. Castets-Renard, you mentioned the diversity of organizations and how important it is in the field of AI, but I feel like government investments in AI – including all our research institutions – are going into the technology, not into different organizational forms. Can we have more nonprofit AI development organizations, such as cooperatives, collectives, data trusts? The government, it seems, does not want to invest in these organizational forms or in research on them.

[00:52:20] Céline Castets-Renard: It's a good question – the share of government funding and the choices that are made. I can talk about what I know; I know a little more about French funding at the moment, because there has just been an announcement by the Macron government which, in reality, isn't a very strong announcement for AI. The amount of funding is rather low, because it's a period when state budgetary resources are still quite limited, and I think priorities in France are going elsewhere – particularly to a whole lot of construction as part of the 2024 Olympics.

[00:52:58] For the Canadian government, proportionally speaking, I think the investment is higher. I also think – and here we're on record, too bad – that there's a whole pan-Canadian AI strategy being implemented by CIFAR (editor's note: the Canadian Institute for Advanced Research) through three AI institutes, and today, for the research part, it's mainly Mila, in Montreal – I think I'm right in saying so – that gets the biggest share.

[00:53:27] The other two institutes, in Edmonton and Toronto – the Vector Institute in particular – are perhaps more supported by companies, or in any case they encourage more industrial AI or applications; but in any case, the research part seems to me to be located in Montreal.

[00:53:47] In any case, all governments - and I think this is true in the U.S. as much as anywhere else - are trying to encourage public-private partnerships, so that researchers can turn basic research into applied research, so that it can be operational for local businesses and so that there is a return on investment, if I can put it that way, for local businesses and ultimately for the population.

[00:54:13] Which isn't easy, because more or less everywhere, what we're seeing is that as soon as there's a company that's doing well, a start-up that's doing well, it tends to be bought up by bigger companies, especially American ones. We're having this big debate right now in France about a language-model company – Mistral AI, which you may be familiar with – that is “threatening” to leave France and Europe because it doesn't have the resources or the necessary scale-up, and part of the discussion in Europe is about creating a capital market to match the level of financing required, which is more readily available in the United States.

[00:54:54] So there are a lot of questions and a lot of answers, and it's true that it's an international game, and as soon as a company does well and has a good product, the tendency is to go to the Americans for better financing. 

[00:55:10] Anna Vartanyan: I know we're running out of time, but I want to mention that Mila is one of the AI research institutes Céline mentioned. We are a nonprofit; the other institutes are Vector in Toronto and, in Alberta, Amii. As I said, Mila is a nonprofit organization, and we do different types of research – in health and the environment, where I specialize – as well as applied research.

[00:55:33] The focus of Mila specifically is on bringing AI to humanity – hence nonprofit. So thank you, Céline, for saying that. It's important to know that we have that capacity in-house in Montreal, with a lot of wonderful computer scientists – and non-computer scientists – who are doing a great job.

[00:55:55] I think we are running out of time – we are actually out of time. So with this, I would just like to thank you, Jocelyn, and thank you, Céline, for a wonderful discussion and for bringing your passion to this floor today. Thank you to the audience members who asked questions and to those tuning in online. With this, I give the floor to Annie.

[00:56:14] Annie Pilote: Thank you to our panelists, Jocelyn Maclure and Céline Castets-Renard, and our moderator Anna Vartanyan, for this thought-provoking conversation. On behalf of the Federation for the Humanities and Social Sciences and McGill University, thank you to the Big Thinking Series leading sponsors who’ve supported this event: The Canada Foundation for Innovation and Universities Canada.  

[00:56:44] If you'd like to revisit the conversation, the video will be available on the Congress platform in the coming days, where you can view it until July 31, 2024. Today's conversation is the first of the Big Thinking events at Congress 2024. I trust you found this conversation as enlightening and thought-provoking as I have, as we continue to reflect on the theme of Sustaining Shared Futures.

[00:57:14] The next Big Thinking panel, “Sustaining Culture,” featuring Janine Elizabeth Metallic, Ryan DeCaire, and Mskwaankwad Rice, will take place on Sunday, June 16, from 12:15 to 1:15.

[00:57:29] As Congress continues, you can participate in the conversation on social media using the hashtag #Congressh. I also invite you to fill out our short survey on your experience at today's Big Thinking lecture: using your mobile device, you can scan the QR code on the screen behind me or on the signs posted at the door to share your feedback.

[00:57:54] Thank you to all of you who have joined us today. Please enjoy the rest of the day and the upcoming Congress sessions. Thank you, and goodbye.