By Jorge Pascual, Corporate Lawyer at Techsoulogy
As the field of Artificial Intelligence (AI) continues to evolve rapidly, the arrival of these technologies in the mainstream market, exemplified by ChatGPT, has sparked intense discussion and debate.
However, developers have been working on Artificial Intelligence solutions for a long time now. In fact, the use of Artificial Intelligence in the field of advertising has raised ethical concerns about privacy, discrimination, and the potential for harmful effects on individuals and society. The European Commission has recognized the need for a regulatory framework to ensure that Artificial Intelligence systems are developed and used in a responsible and ethical way and, after publishing a first draft of an Artificial Intelligence Directive in 2021, is close to passing the final text.
The European Commission’s main concerns around AI
The draft was published in April 2021 and has since been the subject of debate and discussion among various stakeholders, including industry representatives, civil society organizations, and policymakers. However, it has not been passed yet (the final wording is expected around mid-to-late 2023).
The draft has been criticized by some for being too prescriptive and for potentially stifling innovation in the Artificial Intelligence field. Others have welcomed it as a much-needed step towards ensuring that Artificial Intelligence is developed and used responsibly and ethically.
One of the key ethical implications of the use of Artificial Intelligence in digital advertising is the potential for discrimination.
Artificial Intelligence systems can learn to make decisions based on patterns in data, and this can lead to biased or unfair outcomes. For example, an Artificial Intelligence system used in advertising might learn to target ads to certain demographic groups based on factors such as age, gender, or race. This could lead to discriminatory outcomes and reinforce existing inequalities.
Another ethical concern is the potential for Artificial Intelligence systems to manipulate individuals and exploit their vulnerabilities. Artificial Intelligence systems can be designed to create highly personalized and persuasive ads that are tailored to an individual’s preferences and interests. This can make it difficult for individuals to make informed decisions and can lead to harmful outcomes such as addiction or financial harm.
Transparency, privacy, and other European Directives on AI
The European Directive on Artificial Intelligence seeks to address these ethical concerns by establishing clear rules and standards for the development and use of Artificial Intelligence systems in digital advertising. The directive requires transparency and explainability, high standards on data protection and privacy, and prohibits the use of Artificial Intelligence systems that are designed to manipulate or exploit individuals.
One of the key legal implications of the directive in tackling these ethical challenges is the requirement for transparency and explainability. In practice, these two principles mean that companies using Artificial Intelligence in digital advertising must provide clear and understandable information about how their Artificial Intelligence systems work and how they make decisions. This means that they must be able to explain the algorithms they use, the data they collect, and the criteria they use to target and personalize ads.
In addition, the directive requires companies to comply with data protection laws, such as the General Data Protection Regulation (GDPR). Companies must ensure that they collect and process personal data in a lawful and transparent manner and that individuals have the right to access and control their data. This means that companies must obtain explicit consent from individuals before collecting and using their personal data and must provide clear information about how the data will be used.
The directive also prohibits the use of Artificial Intelligence systems that are designed to manipulate or exploit individuals.
This includes the use of Artificial Intelligence to create ads that are misleading, discriminatory, or harmful. Companies must ensure that their ads do not promote illegal activities, hate speech, or other harmful content.
Companies that fail to comply with the directive could face fines and legal action. For enforcement, the directive empowers national authorities to impose sanctions on companies that violate any rule or mandate established in the directive. Not to mention the reputational damage that non-compliance with these legal requirements may cause, as consumers are increasingly concerned about the ethical and responsible use of Artificial Intelligence in advertising.
As the use of Artificial Intelligence in digital advertising continues to grow, it is more important than ever for companies to embrace ethical and responsible practices. The directive provides a clear roadmap for companies to develop and use Artificial Intelligence systems in a way that is respectful of individuals and society, as another step in Brussels’ effort towards a fair and transparent Internet environment. By prioritizing transparency, the safeguarding of users’ online identity, and the ethical use of Artificial Intelligence, companies can build trust and confidence with consumers, while also staying on the right side of the law.
Embracing these principles is not only the right thing to do, but it is also good for business. By taking a proactive and responsible approach to the development of Artificial Intelligence solutions, companies can create campaigns that resonate with consumers, build lasting relationships, and ultimately drive success. Let’s work together to embrace the opportunities of Artificial Intelligence while ensuring that it is used in a way that is ethical, responsible, and equally beneficial for all.