Artificial Intelligence casts a much bigger shadow than any other potentially disruptive innovation of the early 21st century. To make sure that the next big leap for humanity makes a safe landing and does not come at the expense of democratic principles and civil rights, politicians all over the world are taking their first steps toward legislation that serves as a backstop against future harms.
Of course, the task of setting a legal frame for the development and implementation of AI tools is extremely complex, as it requires foreseeing both potential applications and possible misuses. Even though we can't be totally sure how much of the current surge in artificial intelligence tools is merely hype, all major companies in the world are chasing the proverbial goose that may lay unparalleled golden eggs.
According to Goldman Sachs, worldwide investment in generative AI alone will approach 200 billion dollars by 2025. And that is despite the fact that it has yet to deliver measurable gains in productivity. Early adoption is a must, and thus, so is early regulation.
AI Laws: A Crucial Policy in the Making
In August 2023, a piece of legislation called the Interim Measures for the Management of Generative Artificial Intelligence Services came into force in China. It outlines restrictions on companies that provide such services to consumers. These restrictions apply both to the training data used and to the outputs produced.
Also, last year, the White House Office of Science and Technology Policy issued a Blueprint for an AI Bill of Rights, which acknowledges the need to prevent technological development from amplifying racial, gender, and other social biases, systematically violating privacy and copyright, or blocking citizens' access to critical resources.
The Concern over Biometrics
By the end of 2023, the European Council and the EU Parliament had also reached an agreement on a European Artificial Intelligence Act. Similar to legal frameworks in development in other countries, it champions the establishment of safeguards and also intends to protect the right of consumers and the general public to file complaints and receive meaningful explanations from service providers without resorting to legal action.
But the EU AI Act goes a step further and also prohibits the following specific use cases:
- Biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race).
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
- Emotion recognition in the workplace and educational institutions.
Investing in Artificial Intelligence in Security
According to a recent study, the AI security market is estimated at 25 billion dollars in 2024 and is expected to reach 60 billion by 2029, growing at a compound annual rate of 19% over that period. But this investment will necessarily have to take into account the new legal frameworks arising all over the world.
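As a quick sanity check on those figures, the 19% compound annual growth rate is exactly what the two endpoints imply. A minimal Python sketch (variable names are illustrative, not from the cited study):

```python
# Verify the compound annual growth rate (CAGR) implied by the cited
# market estimates: 25 billion in 2024 growing to 60 billion in 2029.
start_value = 25.0   # estimated market size in 2024, billions of dollars
end_value = 60.0     # projected market size in 2029, billions of dollars
years = 2029 - 2024  # five-year horizon

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 19%, matching the study
```

Running it shows the implied rate rounds to 19% per year, consistent with the projection quoted above.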
Manufacturers should be mindful when offering additional functions enabled by the same technology they use to improve the efficiency of their security systems. At least, that is, until the legal frameworks now in the making provide a clearer picture of what is and is not compliant regarding the use of AI.