2025 Archives | Challenge Conference
Carleton University

Avoiding the ‘Paperclip’ Conundrum: Innovating with AI While Limiting Risk
/challengeconference/story/ai-innovation-limiting-risk/
Thu, 15 May 2025 14:46:58 +0000

The Carleton Challenge Conference tackled one of the central challenges of artificial intelligence (AI) — the tension between its immense possibilities and potential peril — during a panel discussion exploring the ethics, policy, governance and risk dimensions of AI.

When thinking about the risks and ethics surrounding the technology, researchers contemplate the hypothetical AI paperclip factory, explains Mary Kelly, professor of Cognitive Science at Carleton University.

Although paperclip manufacturing “sounds innocuous,” the dystopian fear is that a factory run by AI could be so driven by the quest for productivity that it will start ignoring obvious ethical boundaries to maximize output. In what Kelly calls “a very fanciful scenario,” the AI factory “starts regarding humans as potential paperclip material.”

“The paperclip AI is not malevolent in that way that a human is malevolent. It’s malevolent in a way that … it doesn’t even understand that humans are something that can be harmed.”


Panelists at the 2025 Carleton Challenge Conference — The AI Summit: Navigating Disruption and Transformation — discuss the ethics, policy, governance and risk dimensions of artificial intelligence: (left to right) Carleton University Cognitive Science Professor Mary Kelly, Jordan Zed of the AI Secretariat at the Privy Council Office of Canada, Kathleen Fraser of the National Research Council of Canada, Kate Purchase, Microsoft’s senior director for International AI Governance and moderator Allan Thompson, director of Carleton’s School of Journalism and Communication.

Humans, on the other hand, understand this.

“We have empathy,” notes Kelly. AI engineers are “trying to replicate the capacities of the human brain. As far as we’ve come — and we have come very far in these past few years — there’s still big differences between what the human brain can do and AI can do.”

She adds that AI technologies “first have to be able to reason about the existence of human minds and be able to predict what humans want or need in order to be safe.”

“Right now, they’re not good at it,” says Kelly.

That’s why, the panelists agreed, when developing and implementing AI technology in Canada, it’s essential that its human creators guide innovation with human rights, fairness and public safety in mind. Part of the challenge is ensuring that policy and government regulations keep pace with the rapidly changing technology.


Guardrails, Governance, and Canada’s AI Opportunity

Still, countries around the world are rushing to operationalize the world-changing power of AI even as guardrails are being erected to ensure its safe application. Fittingly, while conference-goers were gathered at Carleton to explore the challenges confronting Canada in the age of AI, the Canadian government was naming its first federal Minister of Artificial Intelligence and Digital Innovation — former CBC and CTV broadcaster Evan Solomon.

“Even though there’s such a strong base of AI research in this country,” said Jordan Zed of the AI Secretariat at the Privy Council Office of Canada, there’s “an opportunity to do much more, to think about how we can be leaders in this space, to have greater coherence across the government, across the country — and bring a clear message to our engagements internationally.”

When discussing risks, people often think about the negative implications of AI – out-of-control machines or powerful technologies in unethical hands. But Kate Purchase, senior director for international AI governance at Microsoft, stressed that it’s also important to keep innovating and implementing AI technology because “there is a risk of being left behind.”

She said people often assume that the countries that develop the largest and best language models are going to be the ones that win the AI race. Rather, “it’s about who adopts it fastest, and that’s who ultimately see the greatest gains.”

There seem to be major advancements in AI every week, noted Zed. Years of work have gone into establishing ethical principles for AI and transparency around its development. However, a major barrier to implementing AI remains the divided opinion within Canada and in other countries about what the principles should be and how rules around AI should be applied.

“It will be imperfect, it will be fragmented,” Zed predicted.

“That only means the opportunity is even greater for countries like Canada to help navigate this space and to provide the leadership and bridge some of these divides.”

As Canada is the president of the G7 this year — and will host a summit of world leaders June 15-17 in Kananaskis, Alberta — Zed said this country can play a leadership role.

“AI will almost certainly figure prominently into the discussions that take place next month.”


The Bias Beneath: AI’s Blind Spots and Social Impact

Alongside excitement about AI’s profitability and future uses, a key consideration is mitigating the bias that artificial intelligence systems absorb when trained on information that perpetuates stereotypes and discrimination in society.

For example, if you ask AI for a picture of a scientist, it will spit out an image of an old white man with wild hair and glasses, said Kathleen Fraser, research officer at the National Research Council of Canada and adjunct professor of Computer Science at Carleton. Likewise, AI can reinforce systems that further entrench racial, gender and other biases that harm marginalized groups and run counter to Canada’s values of human rights.

Fraser explained that you can ask a large language model how it arrived at an answer, and it will give you an explanation that sounds reasonable – but is not necessarily related to the underlying mechanisms and processes that led it to that answer.

“Sometimes the patterns are because of real knowledge or information that’s there, and sometimes the patterns are because of systemic historical biases, and sometimes the patterns are just random,” she said.

Many are also concerned about the concentration of wealth in the hands of the few who build and control new AI technologies, rather than distributing the profits — and benefits — more equitably throughout society.

“It’s really important that as this technology sees greater use…we all benefit from it,” Kelly said. “That we have more leisure time and more comfortable lives, rather than AI just being used as a tool for extracting more value from fewer labourers and lining the pockets of the very wealthy.”

Although many are concerned about the risks of AI in the future, Fraser said that when considering policy and governance, we need to remember to focus on the issues that are already here.

“One of the biggest misconceptions that I see around AI is this idea that all the risks of AI might happen at some point in the distant future,” she said.

“I think the risks are here now.”



2025 Carleton Challenge Conference Recap

Ericsson Exec Envisions ‘Self-Driving’ Autonomous Networks of AI ‘Friends’
/challengeconference/story/agentic-ai-elena-fersman/
Thu, 15 May 2025 14:35:09 +0000

Elena Fersman embraces a scenario that some artificial intelligence (AI) pessimists would likely describe as a nightmare: self-automated telecommunication networks created by AI systems tapping each other for ideas.

Fersman, a vice-president at Swedish-based telecom giant Ericsson and head of the company’s Global AI Accelerator, closed this year’s Challenge Conference at Carleton University by offering glimpses of the next phase of AI’s evolution.

Ericsson has partnered with Carleton for more than five years, noted Rafik Goubran, vice-president (Research and International), when introducing Fersman’s closing keynote. He praised the “collaborative effort to drive innovation, train skilled workers and build more reliable, secure technology for the future of 5G wireless communications.”


Ericsson vice-president Elena Fersman, head of the company’s Global AI Accelerator, delivered the closing keynote address at this year’s Carleton Challenge Conference — The AI Summit: Navigating Disruption and Transformation.

He added that Carleton has more than 100 researchers working on AI-related projects, contributing to a record-breaking year in which the university drew $113 million in sponsored research funding.

Fersman’s talk, AI and Telecom: Evolving Architectures and Operational Integration, focused on the integration of AI into telecommunication networks. She envisions telecom networks made fully self-automated through the use of AI.

“I don’t want to have language, as we know it,” said Fersman.

“I want to have very efficient and real-time communication between things.”

Many people have adopted conversational chatbots into their daily lives, she noted. These AI chatbots rely on certain rules and human input to generate content, but Fersman said she’s thinking well beyond this.


The Rise of Agentic AI: Networks That Think Together

What she describes is agentic AI, an emerging technology in the rapidly evolving AI sphere.

While Fersman says she hears about agentic AI “every second word” in her own community, she was pleased to hear it being mentioned by others at the Carleton conference.

Agentic AI’s emergence is revolutionizing the way we apply AI, according to Fersman. Rather than relying on human input like traditional chatbots built on large language models, agentic AI is able to reason and orchestrate other AI agents – creating its own workforce.

“When one is triggered and it doesn’t know completely how to address the task, it can trigger a friend,” explains Fersman.

“It can ask five friends. Together, they will automatically build a workflow, and they will continue addressing the whole task. One agent can have partial knowledge about solving the task, and they can talk with each other and optimize their knowledge. The master of those agents — so the oldest brains in the brain — may learn about which agent is performing better, which one is performing worse.”
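The delegation pattern Fersman describes, in which an agent with only partial knowledge of a task triggers "friends" while a master keeps track of which agents perform well, can be sketched in a few lines of Python. This is a purely illustrative toy, not Ericsson's implementation; every name and data structure below is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A peer agent with partial knowledge: it can only handle some task types."""
    name: str
    skills: set

    def handle(self, task: str) -> bool:
        return task in self.skills

@dataclass
class Master:
    """The orchestrating agent: delegates sub-tasks and scores its peers."""
    peers: list
    scores: dict = field(default_factory=dict)  # agent name -> completed sub-tasks

    def solve(self, tasks: list) -> list:
        completed = []
        for task in tasks:
            # Ask each "friend" in turn until one can address the sub-task
            for peer in self.peers:
                if peer.handle(task):
                    self.scores[peer.name] = self.scores.get(peer.name, 0) + 1
                    completed.append((task, peer.name))
                    break
        return completed

# Each peer knows how to do one kind of sub-task; the master builds the workflow
peers = [Agent("planner", {"plan"}), Agent("coder", {"code"}), Agent("tester", {"test"})]
master = Master(peers)
result = master.solve(["plan", "code", "test", "code"])
print(result)         # which agent handled which sub-task
print(master.scores)  # the master learns which agents perform
```

Real agentic systems replace the fixed `skills` sets with language-model reasoning and let agents negotiate with each other, but the shape, delegation plus performance tracking, is the same.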


Fersman speaks with Carleton University Industry and Partnership Services (IPS) director Chris Lannon

Agentic AI can learn and adapt in real time, without human interference, she said.

“Yes, humans in the loop, we need to allow for that, but I’m a strong believer in fully autonomous networks,” she said.

“It needs to be able to run completely in a self-driving mode. And in some cases, as we already discussed here, you will not be able to explain the decisions. Because it happens — either it’s a too-big search space, or it’s a too-small real-time loop.”

Automation using AI is one of the contributors to cost reductions within companies, according to Fersman.

She gave the example of a department of 45,000 workers at a company deploying AI: automation reduced the staffing requirement from 45,000 to 7,000 people. However, Fersman emphasizes that these workers are not being fired, but are instead reallocated to different jobs within the company, optimizing the workforce rather than shrinking it.

The cost of developing AI software is also decreasing, she said.

In the past, AI models have cost millions upon millions to train and implement. In December, US$24 million was spent on the OpenAI o1 model, but a few weeks later DeepSeek R1 was deployed at a fraction of that expense, costing just US$880,000.

“Within weeks from their release, the researchers at Stanford and Berkeley are coming out with models of similar precision that were trained for less than $50,” says Fersman.

Efficiency, Emissions, and Ethical Tradeoffs

Like several speakers at this year’s Challenge Conference, Fersman acknowledged certain drawbacks of AI — for example, the high CO2 emissions caused by the huge energy appetites of ChatGPT and other generative AI models.

Fersman said she tries to make her AI searches worth their cost in CO2 emissions, but notes that she finds herself looking at pictures of cats on occasion, too — a potentially less worthy search.

This does not deter Fersman from her research, however, as she weighs the potential benefits of AI with its negative impacts.

“The important thing,” she said, “is that when you ask the question about the revenue investment of every search, are we winning something from there? Yeah, probably, in many, many cases, we will be winning something for sure. If it comes to, for example, a more reliable method that predicts any failure — and you can prevent a car crash. That’s a good case.”




Balance Opportunity and Responsibility in Unleashing AI’s Power
/challengeconference/story/ai-balance-opportunity-responsibility/
Thu, 15 May 2025 14:26:50 +0000

Artificial intelligence (AI) has the power to transform the way business is conducted and how public services are delivered, a group of panelists at the forefront of the AI revolution said at Carleton’s annual Challenge Conference.

The conversation about “catching the wave of a digital tsunami” — moderated by Allan Thompson, director of Carleton’s School of Journalism and Communication — brought together Danielle Manley, director of the university’s new School of Nursing, Carleton AI researcher Majid Komeili, Sonya Shorey, president and CEO of the economic development agency Invest Ottawa, and Julien Kathiresan, director of finance with Ottawa-based venture capital firm Mistral Venture Partners.

Whether AI sparks excitement, worry, curiosity or all of the above, one simple fact is true, said Shorey: “With great opportunity comes great responsibility.”


Moderator Allan Thompson, director of Carleton’s journalism school (right), leads a discussion titled “Catching the Wave of a Digital Tsunami” with panelists (left to right) Carleton computer scientist Majid Komeili, director of the university’s Intelligent Machines Lab; Danielle Manley, director of Carleton’s new School of Nursing; Julien Kathiresan, director of finance with Mistral Venture Partners, and Sonya Shorey, president and CEO of Invest Ottawa.

Shorey spoke passionately about the responsibility of integrating AI in a way that is “equitable, safe and reliable,” ensuring all members of society benefit from the technology’s transformative power. It’s an approach she said is necessary to ensure no Canadians are left behind in the wake of AI innovations.

Kathiresan highlighted the importance of true innovation, emphasizing the distinction between businesses that are genuinely harnessing AI to create value and what he termed an “AI wrapper company” — a firm that’s only superficially exploiting AI “hype” to rebrand itself without any innovative thought.


Unlocking something better

The excitement around AI, he insisted, lies in “companies trying to unlock something better.”

Manley said AI applications offer great potential for improving the delivery of healthcare. She gave the example of tools that offer “ambient listening” capture of conversations between patients and medical practitioners to reduce the crushing paperwork burden faced by many healthcare professionals. These audio recording, transcribing and notetaking features can free healthcare providers to focus more on patient care and less on filing documentation.

“In healthcare,” said Manley, rather than money, “time is the ultimate currency.”

In turn, AI tools can be used to empower patients to get more involved in their “health care journey,” helping them understand medical terms, procedures and care options.

The potential capability of artificial intelligence tools in the field “blows my mind,” said Manley. “I’m excited about AI.”

Komeili, a computer scientist and director of Carleton’s Intelligent Machines Lab, has worked with the City of Ottawa to demonstrate how AI can be used to predict someone’s risk of chronic homelessness and plan for effective intervention strategies. He explained that chronic homelessness is defined as someone staying in a shelter for at least 180 days within the past year.
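The 180-days-within-a-year definition lends itself to a simple predicate, which a prediction system might use to label its training data. The sketch below is illustrative only; the function name, inputs and date handling are assumptions for the example, not the City of Ottawa's actual system.

```python
from datetime import date, timedelta

def is_chronically_homeless(shelter_nights: list, today: date) -> bool:
    """True if at least 180 shelter nights fall within the past 365 days."""
    window_start = today - timedelta(days=365)
    nights_in_window = sum(1 for night in shelter_nights
                           if window_start < night <= today)
    return nights_in_window >= 180

# Example: 200 consecutive shelter nights ending today meets the definition
today = date(2025, 5, 15)
nights = [today - timedelta(days=i) for i in range(200)]
print(is_chronically_homeless(nights, today))  # True
```

A predictive model, by contrast, tries to estimate the *risk* that this predicate will become true for someone before it happens, which is what makes early intervention possible.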


Students working in the Intelligent Machines Lab

He emphasized the importance of a balanced conversation in society about “AI innovations and opportunities, as well as risks and AI safety.”

Kathiresan said he was particularly excited about AI’s power as a “multi-modal system” that can combine text, audio, video and various forms of data into a “coherent” whole.

“To me that’s fascinating — and in many ways, pretty human-like.”

Shorey said AI can help businesses generate elaborate marketing strategies “that normally would have taken weeks or months” but are “now being produced in minutes.” She said AI makes it possible for companies that use the technology effectively to have “teams of 10 that can act like teams of 100.”

But that raises a question about “what happens to the other 90 people?” she acknowledged.

Shorey noted that AI is changing the employment landscape, with some jobs disappearing and new ones being created. For her, the question is how AI can improve the job market in a way that still lets people do what they love.

“How do we make sure the right people get the right opportunities?”




Harnessing the Power of AI for the Greater Good
/challengeconference/story/canada-ai-harnessing-power/
Thu, 15 May 2025 13:55:20 +0000

While it may still seem to some like futuristic technology, artificial intelligence (AI) is already having an impact on almost every facet of Canadian life in real time.

That was a central theme at Carleton University’s third annual Challenge Conference, where leaders in academia, business, government and the community gathered to explore how society is navigating the disruption and transformation spurred by rapid advances in AI.


University President Wisdom Tettey opened the third annual Carleton Challenge Conference by highlighting both the “trepidation” many feel about the advances in artificial intelligence and the immense possibilities presented by the technology.

As if on cue, the conference was under way at the very moment when it was announced that Prime Minister Mark Carney — who was unveiling the new federal cabinet at Rideau Hall — had appointed the country’s first Minister of Artificial Intelligence and Digital Innovation, former CBC and CTV broadcaster Evan Solomon.

Meanwhile, at the front of a packed room in Carleton’s Richcraft Hall, University President Wisdom Tettey opened the conference — sponsored by the Ottawa-based Danbe Foundation and Ericsson, the Swedish-based technology company — by acknowledging that AI brings certain levels of risk, but also immense opportunity.

“When they hear the word AI, it just evokes in people different kinds of emotions,” said Tettey.

“And while there may be risk with that, the potentials are something that excites people about looking at the possibilities that attenuate the impact, that reduces the risks and allows us to work together in common purpose.”

This is the third annual Challenge Conference hosted by Carleton. The 2024 edition focused on climate change solutions and featured key leaders and experts from business, government and academia. The inaugural conference in 2023 offered informative and thought-provoking conversations about the world’s mental health crisis, a societal challenge brought into sharp focus as nations grappled with the COVID-19 pandemic.


Leveraging the power of AI

In his welcoming remarks, Tettey described several university initiatives that are already leveraging the power of AI for the greater good. He cited the work being done by Carleton Computer Science Professor Majid Komeili — a conference panelist — who has been engaged in a high-profile research project with the City of Ottawa on how to use machine learning to predict an individual’s risk of chronic homelessness. It’s a real-world application of AI aimed at helping local government support early and effective intervention in one of Canada’s most pressing social challenges.

Adegboyega Ojo, the conference’s keynote speaker, is a professor at Carleton’s School of Public Policy and Administration and the Canada Research Chair in Governance and Artificial Intelligence. Ojo described AI as a tool with “the power to fundamentally transform economic and social structures across a wide range of industries.” He equated the transformative power of AI to such historic technological innovations as the discovery of electricity, the invention of the steam engine and the rise of the Internet.


Carleton Professor Adegboyega Ojo, Challenge Conference keynote speaker and Canada Research Chair in Governance and AI, highlighted the “AI Paradox” facing Canada as it attempts to translate its strong research profile on AI into economic and social benefits.

Naturally, said Ojo, an instrument with that level of potential influence has governments scrambling to determine the best way to harness AI’s power. He outlined the current global landscape around AI, noting that international organizations such as the Organization for Economic Co-operation and Development (OECD), European Union, United Nations and African Union have all established frameworks promoting responsible deployment of AI technology.

Even so, Ojo’s presentation included a warning: “Global efforts in responsible AI significantly lag behind rapid AI adoption, leaving critical gaps unaddressed in safeguarding human rights and protecting vulnerable groups.”

To highlight the risks, he pointed out that in just a two-day period this month — May 5 and May 6 — the OECD AI observatory flagged a host of potential safety and ethical concerns, including an official White House account posting an AI-generated image of U.S. President Donald Trump, a WhatsApp voice scam causing major financial losses, and a doctor prescribing the wrong drug dosage based on AI-generated advice.

Ojo also noted that there are clear regional leaders in AI development, with the United States charging ahead of Canada and other countries in AI readiness and model releases, and East Asia dominating research output.


The AI paradox facing Canada

This led to what he described as the “AI paradox” facing Canada. According to Ojo’s research across multiple AI indexes, Canada is a global leader in responsible AI and is home to a world-class research and innovation ecosystem. However, the country’s AI leadership has yet to meaningfully translate to social and economic impact at the rate expected.

He highlighted relatively low levels of private and public investment in artificial intelligence infrastructure in Canada compared to global AI leaders.

“The infrastructure gap is very clear,” said Ojo.

One of the reasons for this, he explained, appears to be low levels of public trust. Fewer than 50 per cent of respondents to a 2024 survey agreed with the statement that products and services using AI have more benefits than drawbacks.

Ojo said boosting levels of public trust in AI is an essential part of the task ahead for the Canadian government.

“If public trust is not developed, there is always going to be low demand,” said Ojo. “So even though we have very strong, responsible innovation framework, we must start to really integrate that trust.”

Overall, Ojo expressed approval of the Canadian government’s current AI plan, noting that it aims to address pressing issues such as low-scale AI investment, and the brain drain that results from a lack of dedicated initiatives to “retain domestic AI talent or attract back expatriates.”

He emphasized to those in attendance that now is the time to act, calling the need to responsibly seize upon the economic and social opportunities presented by artificial intelligence a “carpe diem” moment for Canada.

“It has to be now,” said Ojo. “We just have to do it.”


