.@gemmagaldon: “It’s very easy to train an algorithm to not discriminate against women, but someone needs to do it.”
#machinelearning #ArtificialIntelligence #BigData #CyberSecurity #gdpr #biometrics #Apps #DIGITALSMELive

(Audio edited for length and clarity)

Speakers

– George Sharkov, Adviser to the Minister of Defense and Director at European Software Institute CEE

– Josianne Cutajar, Member of the European Parliament

– Gemma Galdon Clavell, Founder and CEO of Eticas Research and Consulting

Moderator

Sebastiano Toffaletti, Secretary-General at DIGITAL SME

Conversation

Artificial intelligence (AI) is a fast-developing technology with the potential to fundamentally impact our lives. Deployed in multiple fields to increase efficiency, AI can improve many aspects of our lives, but at the same time it constitutes a risk.

AI can improve healthcare or make farming and production systems more efficient. On the other hand, autonomous AI systems often lack explainability and transparency.

As AI becomes part of ever more aspects of people’s lives, trustworthiness plays a central role. Back in 2018, the European Commission shared its vision for AI, which supports “ethical, secure and cutting-edge AI made in Europe.”1

This vision became more concrete last February, when the Commission published its White Paper on Artificial Intelligence, promoting the uptake of AI while addressing the risks associated with certain uses of this new technology. The High-Level Expert Group on Artificial Intelligence (AI HLEG) had earlier published its “Ethics Guidelines for Trustworthy AI” in April 2019.

AI HLEG’s Guidelines define three components for trustworthy AI: 2

  1. Lawful, complying with all applicable laws and regulations.
  2. Ethical, ensuring adherence to ethical principles and values.
  3. Robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.

Prioritizing trustworthy AI systems therefore creates sustainable technology. Moreover, it is crucial to strike a good balance between ethics and innovation, making sure that one does not come at the expense of the other.

Finding a method for integrating ethics into the design of AI has become a main goal of research in recent years. Approaches to moral decision making generally fall into two camps: ‘top-down’ and ‘bottom-up’. Top-down approaches involve explicitly programming moral rules and decisions into artificial agents. Bottom-up approaches, on the other hand, involve developing systems that learn to distinguish between moral and immoral behaviours, as the sketch below illustrates.
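As a rough illustration of the difference, the following sketch contrasts a top-down, rule-based check with a bottom-up classifier trained on labelled examples. It is a minimal sketch only: the rule names, features, and toy data are hypothetical, and the scikit-learn classifier simply stands in for whatever learning method a real system would use.

```python
# Minimal, illustrative sketch of the two approaches described above.
# The rules, features, and toy data are hypothetical, not a real ethics framework.
from sklearn.linear_model import LogisticRegression

# --- Top-down: moral constraints are explicitly programmed as rules ---
RULES = [
    lambda action: not action["uses_protected_attribute"],  # e.g. gender must not drive the decision
    lambda action: action["has_human_oversight"],           # a human must be able to intervene
]

def top_down_permissible(action: dict) -> bool:
    """An action is allowed only if every hard-coded rule is satisfied."""
    return all(rule(action) for rule in RULES)

# --- Bottom-up: the system learns acceptable vs. unacceptable behaviour ---
# from labelled examples instead of explicit rules.
# Toy feature vectors: [uses_protected_attribute, has_human_oversight]
X = [[0, 1], [0, 0], [1, 1], [1, 0]]
y = [1, 1, 0, 0]  # 1 = labelled acceptable, 0 = labelled unacceptable
clf = LogisticRegression().fit(X, y)

def bottom_up_permissible(features: list) -> bool:
    """Defer to whatever boundary the model has learned from the labels."""
    return bool(clf.predict([features])[0])

if __name__ == "__main__":
    action = {"uses_protected_attribute": False, "has_human_oversight": True}
    print("top-down verdict: ", top_down_permissible(action))
    print("bottom-up verdict:", bottom_up_permissible([0, 1]))
```

In practice the two are often combined: hard rules capture non-negotiable constraints, while learned components handle cases no rule set can anticipate.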

There are, of course, several approaches to ethics. Robust ethical principles are essential to the future of this rapidly developing technology, but not all countries understand ethics in the same way. Countries such as Germany, the UK, India, Singapore and Mexico have decided to create AI ethics councils, while the matter is almost completely disregarded in Japan, South Korea and Taiwan.

What does ethical AI mean for SMEs? How can we make sure that trustworthy AI will be a competitive advantage for AI in Europe, rather than a hurdle to innovation?

1 See https://ec.europa.eu/transparency/regdoc/rep/1/2018/EN/COM-2018-795-3-EN-MAIN-PART-1.PDF

2 See https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

.@gemmagaldon: “Ethical tech makes us better. Approach ethics as a risk assessment exercise, which is a lot cheaper in the long run. We’re moving from principles to real practices” #AI #ML #bigdata #privacy