The EU's AI Act will have a significant impact on both individuals and industries using AI. Internal guidelines covering both the ethical and legal aspects of AI are needed, says Wenche Karlstad.
What kind of requirements should be placed on AI-generated content? Will tighter restrictions slow down the rapid pace of innovation? Where is the line between legislation, guidance and ethical judgement?
The EU regulation on artificial intelligence (the AI Act) was approved by the EU Parliament in mid-June. This means that within 1-2 years, the EU will have a law in place that will not only apply in EU countries, but will also set the standard for much of the world.
"I believe the AI Act will spread much faster than the EU's General Data Protection Regulation (GDPR). But the two will coexist closely. Data protection will become even more central in the future."
This is according to Wenche Karlstad, head of digital sovereignty initiatives at Tietoevry. She is one of those who have followed the process closely over time.
Although there is still a round of negotiations with the European Commission and the Council of Ministers before the law is finalized, the draft says a lot about the future framework for the use of AI systems.
"This is the world's first AI regulation, and anyone developing or introducing AI applications in Europe will have to comply with it," Karlstad points out.
Professor Marija Slavkovik, head of the Department of Information and Media Science at the University of Bergen, also believes the AI Act will have a major impact.
"Even though many people have objections, we need to regulate the use of artificial intelligence. What we are seeing now is the beginning of a new era of automation," she says.
Karlstad believes that we are only beginning to see the outlines of what AI can do for us. The list of areas where AI will affect everyday life and work is endless.
"Ultimately, AI will be able to add value to society as a whole," says Karlstad.
The forthcoming EU regulation classifies AI applications according to their level of risk. The scale has four levels: unregulated, limited risk, high risk and unacceptable.
"Real-time social scoring, for example, will be banned outright. The high-risk category entails a number of requirements, including registration, risk assessment and labelling."
"Anyone who is affected should prepare now for the upcoming legislation and what it means. Those who fail to do so may face sanctions," says Karlstad.
She emphasizes that the EU regulation is not only aimed at restricting AI. It also aims to encourage innovation.
"The EU wants to strengthen Europe's position in an area where the US and China dominate. In this respect, the AI Act is a deliberate move by European Commission President Ursula von der Leyen."
Wenche Karlstad believes that there is a need for internal guidelines that address both the ethical and legal aspects of AI.
Establishing clear guidelines will create a more predictable environment.
For the EU, the desire to strengthen Europe's digital sovereignty is at the heart of the matter. In a broader perspective, it is about technological and economic growth as well as about safeguarding citizens' fundamental rights and values.
Credibility is a key word when it comes to using AI. You need to know where the data is and how it is secured. But consumers should also, as NTNU researcher Inga Strümke, author of "Machines that think", points out, gain insight into how the technology works.
What is "real" and what is AI-generated? What paths do data take in an increasingly intertwined value chain? When are our fundamental rights at risk?
"It's equally exciting when we move beyond the realm of regulation to ethical principles. Risk assessments and technical documentation are most important in the high-risk category, but everyone who uses AI has a responsibility," says Karlstad.
This responsibility may involve ethical dilemmas.
"For example, how do you balance the desire for transparency against the need to protect business-critical information? How do you avoid discrimination in hiring processes, or privacy breaches when handling sensitive information?"
Those who can explain their judgements and clarify accountability in the development and use of AI will have an advantage.
The concept of responsible AI has become increasingly relevant for anyone implementing AI technology.
"It's not just about attitude, but about establishing clear guidelines - something that Tietoevry has already done by implementing its principles of responsible AI," says Karlstad.
Karlstad believes that AI should always be evaluated from an ethical perspective. This requires internal control principles and risk assessments in addition to legislation.
Slavkovik at UiB believes that ethics become especially valuable as AI technology develops as rapidly as it does now.
"But individuals have different tolerance levels for the values they are willing to sacrifice. We need a policy to ease the burden of dealing with social dilemmas," she says.
"As a professor of artificial intelligence and head of the department that educates AI professionals, I take the aspect of 'integrated AI ethics' seriously, and we include it as part of our courses," she continues.
"Government authorities are just getting started"
Self-regulation is particularly important in a field where a multitude of developers and providers use models created by technology giants like Microsoft and Google.
"The banking and finance sectors have been good at self-regulation. By defining an internal framework, one can create confidence and trust regardless of the regulations that may come on top," says Karlstad.
Karlstad believes that the biggest challenge is that small and medium-sized businesses lack the capacity to navigate complex regulations and translate legal texts into business terms.
"They may see changing laws and regulations as obstacles. But I welcome the AI Act proposal. The Nordic countries, like most other authorities, are just getting started in this area, and many have begun to present guidelines for responsible AI use in the public sector," she says.
The technological development is happening at a pace that is hard to keep up with. The question is whether any AI legislation is doomed to be outdated before it comes into effect.
Regardless, organizations should be proactive by analysing and incorporating upcoming legislation into their strategies for data, cloud, and AI. A new report from Tietoevry provides insights from EU institutions, Nordic decision-makers, and security specialists on sovereignty, security, and sustainability.
Originally published in Norwegian on Digi.no.
Read more about how we are working with Responsible AI:
Our Strategic Foresight paper provides unparalleled insights from EU institutions, Nordic decision-makers, and security specialists on upcoming EU initiatives for sovereignty, security, and sustainability.
How will EU regulation impact Responsible AI, Data and Cloud strategies?
Read now
Wenche is passionate about creating value for our customers and enabling growth with attractive service offerings. She has nearly twenty years of experience in the IT business, in management and advisory roles focused on bringing new services to market.
In her current role as Head of Strategic Differentiation Programs at Tietoevry Tech Services, she is leading a global team of experts and managers.