EthicsTalk LIVE on AI
In today’s world, AI is no longer a futuristic concept—it’s here and accessible to everyone. But with great power comes great responsibility. The question many organisations face is: How do we use AI responsibly?
This blogpost summarises the key takeaways from the EthicsTalk LIVE on AI on October 8th, 2024. The panelists were Antti Merilehto, a leading expert in AI ethics, and Sami Ahma-Aho, AI Architecture Lead at Neste. More information on the panelists is available on the event page.
Back to Basics: Values First
When it comes to implementing AI, some organisations are already there and some are still thinking about it. Antti encourages everyone to test and play around with AI so that its limitations and capabilities become familiar. Everyone has access to these tools now; the real mistake would be ignoring them and not putting in the effort to learn how to use them ethically and effectively. Antti highlights that the time to learn is now. On the same note, he emphasises that management teams should learn to use AI too, as this is not something the IT department will take care of on its own.
“It’s okay not to know, but it’s not okay not to use.”
– Antti Merilehto
Sami reminds us that, in order to set the foundation for ethical use of AI, companies need to take a step back and look at their values. Ethical AI usage doesn’t come from the technology itself; it’s rooted in the organisation’s values. When ethics and responsibility are core values, ethical AI development and deployment will naturally follow.
The “Wild West” approach to AI development—where innovation happens without rules or structure—is not only dangerous but irresponsible. Governance is essential; without it, AI runs the risk of causing harm, perpetuating biases, or leading to misuse.
Responsibility is on all of us
It’s not just the responsibility of leadership to ensure ethical AI practices. Developers, the people who are hands-on in creating AI tools, must also be trained to think about the ethical implications of their work. AI doesn’t operate in isolation; it’s shaped by the values and biases of those who build it. Without considering the ethical consequences, we risk embedding systemic issues into the very algorithms designed to help us.
It’s crucial to remember that AI is a tool, and tools are different from people. People use AI, and with that comes a responsibility for understanding the limitations and potential pitfalls. Training employees in ethical AI use should be a company’s priority. After all, people need clear guidance to navigate these new technologies effectively. Antti notes that these tools are not even two years old.
One key point to recognize is that AI’s answers are shaped by past data and often reflect perspectives that are predominantly Western or Anglo-American, and male. It’s essential to be aware of these biases when using AI, particularly in decision-making contexts. AI doesn’t exist in a vacuum—it reflects the worldviews and limitations of those who create it.
Another key point is to realise that AI produces different answers for different people, even to the same question. This happens because AI systems are influenced by the specific data sets they’ve been trained on, and those data sets vary across regions, industries, and even social groups. This inconsistency can be confusing and frustrating, but it also highlights the importance of recognizing AI’s limitations. As organisations and individuals, we must understand that AI offers insights, not universal truths.
The fear of AI
When organisations establish clear guidelines and policies on which AI tools are allowed and how to use them, there is no need to fear AI. In practice, AI should be treated as a conversation partner, not a final authority. If you’re using AI to help with decision-making, it’s worth mentioning that AI was involved. Antti recommends stating on the company’s website that “We use modern AI tools in our communications and marketing.” Transparency fosters trust, especially in situations where AI may have influenced a recommendation or conclusion.
Primarily we should fear that we do not understand it and that we do not utilise it.
– Sami Ahma-Aho
How to get started
As AI becomes an integral part of our daily lives, organisations need to ask themselves some fundamental questions:
- Are we serious about AI? – If the answer is no, it might be time to rethink your position in the market. AI is becoming a crucial component of competitive advantage, and if your company isn’t prepared to embrace it, others will.
- How are the expectations of our clients changing? – Client needs and expectations are shifting rapidly, and AI is playing a significant role in that transformation. Are you ready to be part of this change, or will you allow competitors to capitalize on the benefits of AI while you fall behind?
- Do ethics & compliance, legal & governance, and IT work together on AI? – AI isn’t just a tech issue—it touches every corner of the business. Ensuring that these departments come together to establish clear rules and expectations around AI is crucial. Neglecting this step is a recipe for legal and ethical issues down the road.
In conclusion, ethical AI use isn’t just about following rules; it’s about aligning technology with your company’s values, training your teams to think critically about how they use it, and ensuring that all departments are working together toward the same goals. When done right, AI can be a transformative tool that drives progress while upholding the standards we all strive to meet.
The Harvard Business School article that Antti recommends to everyone can be found here.
This blogpost was obviously created with the help of modern AI tools.
If you want more on the same topic, make sure to watch the webinar recording on the Nordic Business Ethics YouTube channel.