Algorithmic systems are making decisions about our lives, and we usually do not even realize it. These systems are often implemented with the goal of reducing workload (as with the application of ML systems in healthcare), speeding up processes, or eliminating human bias. But artificial intelligence (AI), far from being free of bias, perpetuates the biases present in the societies that develop it. Which brings us to the question: is AI regulation happening?
Algorithms are trained on large amounts of data (Big Data) in order to make automated decisions. This data is often biased, out of date, or even illegally obtained. Moreover, the collection of this information in many cases violates users' privacy.
Bearing in mind that these systems make decisions about people’s lives and that the information they use is not always reliable, it is unsurprising that they often discriminate against the most vulnerable groups. One might assume that they are reviewed before being deployed, but the reality is still far from that. These systems often come into use without having undergone any ethical audit to ensure that they are fit for purpose. As a consequence, algorithms are commonly referred to as black boxes of computer code and data, owing to the great lack of transparency surrounding them.
On this subject, back in 2015 Spain passed Law 40/2015, which refers to the need to audit automated systems used by public administrations. This is a requirement that should be extended to all algorithmic systems with social impact. The Algorithmic Accountability Act of 2022 in the US, in turn, requires companies to assess the impacts of the automated systems they use and sell, creates new transparency about when and how automated systems are used, and empowers consumers to make informed choices about the automation of critical decisions. But these audit and assessment requirements are still far from being applied in a generalized way to all algorithms with social impact.
Privacy protection as a first step
In their efforts to safeguard integrity and basic democratic principles, institutions have created regulations to move towards a fairer AI.
In terms of data protection, the European GDPR came into force in 2018. Since then, entities collecting data must inform users that their personal data will be collected and for what purpose, and must request their consent. Under it, entities such as Amazon and WhatsApp have been fined hundreds of millions of euros.
Several US states have since followed suit. This is the case with the California Consumer Privacy Act of 2018, the Virginia Consumer Data Protection Act, and the Colorado Privacy Act. These bills establish privacy protections such as the consumer rights to access, correct, delete, and obtain a copy of personal information, and to know what personal information is sold and to whom, as well as responsibilities and privacy protection standards for data controllers and processors.
Towards an accountable AI
Data protection was a first step towards an ethical technology, and the commitment now seems to be taking on a greater dimension: both Europe and the United States are considering legislation on artificial intelligence (AI regulation) more broadly, to guarantee its transparency, explainability, and accountability.
AI Act proposal: In the European case, this happens through this act, presented in 2021, which establishes, among other measures, transparency obligations for potentially deceptive AI systems, the need to designate a National Supervisory Authority to act as market surveillance authority, and fines for non-compliance of up to EUR 30,000,000 or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year.
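The penalty ceiling described above amounts to a simple calculation. As a minimal sketch (assuming the proposal's "whichever is higher" rule from Article 71, and a hypothetical helper name), it could look like this:

```python
def ai_act_max_fine(annual_turnover_eur: float) -> float:
    """Sketch of the AI Act proposal's upper fine limit:
    EUR 30,000,000 or 6% of total worldwide annual turnover,
    whichever is higher. Illustrative only, not legal advice."""
    FIXED_CAP_EUR = 30_000_000.0
    TURNOVER_RATE = 0.06
    return max(FIXED_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# For a company with EUR 1 billion turnover, 6% (EUR 60M) exceeds the fixed cap:
print(ai_act_max_fine(1_000_000_000))  # 60000000.0
```

For smaller companies, where 6% of turnover falls below EUR 30 million, the fixed amount is the operative ceiling.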
Far from being an obstacle to innovation, as some have argued, this represents a huge step that should have been taken from the very beginning.
Digital Services Act Package: The European Commission made the proposals in December 2020 and on 25 March 2022 a political agreement was reached on the Digital Markets Act, and on 23 April 2022 on the Digital Services Act. They aim to create a safer digital space in which the fundamental rights of all users of digital services are protected and to establish a level playing field to foster innovation, growth, and competitiveness, both in the European Single Market and globally.
Some US states have also presented their own AI regulation along these lines. One of the most striking and innovative is Massachusetts bill House, No. 142, which includes the creation of the Massachusetts Data Accountability and Transparency Agency.
All of this reflects how institutions and lawmakers worldwide are moving in the same direction to strengthen the protection of citizens in the face of new technologies. This will mean a paradigm shift in how AI systems are designed. It remains to be seen whether it will finally deliver the algorithmic transparency, explainability, and accountability that we have always pursued at Eticas.