
Regulation of Artificial Intelligence in Brazil…

  • Writer: Andressa Siqueira
  • Jul 10, 2023
  • 7 min read

Person typing with a robotic hand coming out of the notebook
Image taken from [Img1]

Introduction


The use of Artificial Intelligence (AI) has gained more and more space among the technological solutions available on the market, to the point that almost all of us use AI at some level without even realizing it. AI is at our work (spam filters, predictive market analysis, among others), in our cell phones (in the spell checker, for example), in our computers (in Microsoft programs...), in our homes (in Alexa, Siri or Google Home)... Have you noticed that?


But how do these AIs work? What data is used in the analyses? Which data has more or less weight in the result obtained? These questions often go unanswered, making AI systems a black box for most people.


Photo: Senator Rodrigo Pacheco

Many people may not know that on May 3, 2023, the President of the Senate, Rodrigo Pacheco, presented a new bill (PL 2.338/2023) to regulate Artificial Intelligence (AI) in our Tupiniquim lands, better known as Brazil!


But what are the points of this PL? What challenges can this PL bring? What benefits are present in this PL? That's what we're going to talk about in this article! So will you come with me?


The Purpose of the PL


The text of PL 2.338/2023 was drafted to be a comprehensive text on the subject of AI. It is based on the meetings of a Commission of Jurists set up last year, which held more than 70 public hearings, and it brings together 3 other PLs whose proceedings began in the Federal Senate in February 2022. These 3 PLs are:


  • PL 5.051/2019, which addresses the use of AI in Brazil

  • PL 21/2020, which presents foundations, principles and guidelines for the development and application of AI

  • PL 872/2021, which deals with ethical issues and guidelines for the use of AI

PL 2.338/2023 states its purpose as follows:


"This Law establishes general norms of a national nature for the development, implementation and responsible use of artificial intelligence (AI) systems in Brazil, aiming to protect fundamental rights and guarantee the implementation of safe and reliable systems, for the benefit of the human person, the democratic regime and scientific and technological development." [1]


We can see two main approaches in the PL:

"On the one hand, it establishes rights to protect people who are impacted daily by artificial intelligence systems, from content recommendation and Internet advertising targeting to the analysis of their eligibility for credit and certain public policies. On the other hand, an institutional inspection and supervision arrangement creates conditions of predictability regarding its interpretation and, ultimately, legal certainty for innovation and technological development." [2]

Our rights


In its art. 5, the PL establishes the following rights for those affected by AI systems:


1 - Right to prior information regarding your interactions with artificial intelligence systems;

Companies must provide a general description of the AI and explain in an understandable way how it works, what interactions it performs, where it gets its data from and how that data is used.


2 - Right to an explanation about the decision, recommendation or forecast made by artificial intelligence systems;

Companies must explain in a simple way how the AI works, that is, the rules and the line of "thinking" that the AI follows to reach the promised results.


3 - Right to contest decisions or predictions of artificial intelligence systems that produce legal effects or that significantly impact the interests of the affected;

Companies must provide a way for users to contest AI results that affect them, whether by producing legal effects or by significantly impacting their interests.


4 - Right to determination and human participation in decisions on artificial intelligence systems, taking into account the context and the state of the art of technological development;

If the user requests the intervention of a human being in a decision made by an AI, the company must provide it.


5 - Right to non-discrimination and correction of direct, indirect, illegal or abusive discriminatory biases; and

Companies must always make sure that their AI does not discriminate and must correct any bias they identify.


6 - Right to privacy and protection of personal data, under the terms of the relevant legislation.

Here comes the LGPD (the Brazilian General Data Protection Law)…


Duties of AI creators


One of the main points addressed by the PL is the need for a preliminary assessment to classify the risk of implementing a given AI. In addition, creators must establish governance structures and internal processes capable of guaranteeing the security of the systems and the service to users. For systems considered high risk, the competent authority must be notified about the system, and some specific internal measures and processes are also required.


"The text follows an approach based on the risk that the tool can pose to citizens. The proposal brings a governance model in which the greater the risk in the development and use of AI, the greater the obligations and responsibilities of both those who develop it and those who apply it." (Fabrício da Mota Alves, Coordinator of the Digital Law and Data Protection area at Serur Advogados) [4]

When a system is considered High Risk


Any AI used for the following purposes is considered high risk:

  1. application as security devices in the management and operation of critical infrastructures;

  2. professional education and training, including systems for determining access to educational institutions;

  3. recruitment, screening, filtering and evaluation of candidates, decisions about promotions or terminations of contractual work relationships, task allocation, and control and evaluation of the performance and behavior of people;

  4. evaluation of criteria for access, eligibility, concession, review, reduction or revocation of private and public services that are considered essential;

  5. evaluation of the debt capacity of natural persons or establishment of their credit rating;

  6. dispatch or prioritization of emergency response services;

  7. administration of justice;

  8. applications in the health area;

  9. biometric identification systems;

  10. criminal investigation and public safety;

  11. analytical study of crimes relating to natural persons;

  12. investigation by administrative authorities to assess the credibility of evidence in the course of investigating or prosecuting offences;

  13. migration management and border control.


What creators must not do


Another point is that the implementation and use of AI systems are prohibited when they fall into one of the following cases, listed in art. 14:


  1. that employ subliminal techniques that have the purpose or effect of inducing the natural person to behave in a way that is harmful or dangerous to their health or safety, or against the foundations of this Law;

  2. that exploit any vulnerabilities of specific groups of natural persons, such as those associated with their age or physical or mental disability, in order to induce them to behave in a manner that is harmful to their health or safety or against the foundations of this Law;

  3. by the government, to evaluate, classify or rank natural persons, based on their social behavior or personality attributes, through universal scoring, for access to goods and services and public policies, in an illegitimate or disproportionate way.

In cases of serious incidents, creators must report to the competent authority the occurrence of security incidents, interruption of critical infrastructure operations, serious damage to property or the environment and violations of fundamental rights.


Punishments


The PL establishes in its art. 27 that "The supplier or operator of an artificial intelligence system that causes property, moral, individual or collective damage is obliged to fully repair it, regardless of the degree of autonomy of the system." [1]


In addition, if creators commit infractions, they may suffer the following sanctions:

  1. warning;

  2. simple fine, limited, in total, to BRL 50,000,000.00 (fifty million reais) per infraction, and, in the case of a legal entity governed by private law, up to 2% (two percent) of the revenue of its group or conglomerate in Brazil in its last financial year, excluding taxes (see the short calculation sketch after this list);

  3. publication of the infraction after its occurrence has been duly investigated and confirmed;

  4. prohibition or restriction to participate in the regulatory sandbox regime provided for in this Law, for up to five years;

  5. partial or total, temporary or definitive suspension of the development, supply or operation of the artificial intelligence system;

  6. prohibition of processing certain databases.
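
Just to make the fine ceiling in item 2 concrete, here is a minimal sketch in Python (with hypothetical revenue figures), assuming the 2%-of-revenue limit applies to private legal entities and the BRL 50 million value acts as an absolute per-infraction cap:

```python
# Minimal sketch of the fine ceiling described in item 2 (hypothetical figures).
# Assumption: for a private legal entity, the fine is limited to 2% of the
# group's revenue in Brazil, never exceeding BRL 50 million per infraction.

FINE_CEILING_BRL = 50_000_000.00  # absolute limit per infraction
REVENUE_SHARE = 0.02              # 2% of group/conglomerate revenue in Brazil

def max_fine(annual_revenue_brl: float) -> float:
    """Return the maximum possible fine for a private legal entity."""
    return min(FINE_CEILING_BRL, REVENUE_SHARE * annual_revenue_brl)

# Example: a group with BRL 1.5 billion in revenue last year would face a
# ceiling of BRL 30 million, below the absolute BRL 50 million cap.
print(max_fine(1_500_000_000.00))  # 30000000.0
```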


Important notes


The PL makes it clear that nothing can be placed on the market without the approval of the inspection body (still to be defined), which will carry out tests before releasing the registration for the system's operation.


Conclusion


The cost for companies to comply with all the requirements in the law is very high. If this Bill is approved, that cost can become a disincentive for the development of technology in the country, or it will increase the concentration that already exists in the market, since only large companies will be able to bear the necessary measures.


Even though the PL brings advances on issues such as risk and bias mitigation in algorithms and accountability on the part of AI creators, making companies take more care so that their AIs do not violate the law, it has some weaknesses regarding how the creators' responsibility is understood.


"If an ambulance engine fails, is the manufacturer responsible for the patient's death? And if a document written in Word incites riots, racism or offenses, is Microsoft co-responsible? In the case of AI it is the same thing. There is a limit to the fraud and control that technology creators can exercise." (Luiz Lobo, founder of digital technology startup Fintalk) [5]

It is important to understand that there are not only downsides to the PL's implementation. This Bill brings some very positive points, such as risk and bias mitigation in the algorithms and accountability on the part of the system creators.


"The proposal creates a statute of user rights, which allows those affected by the technology to contest decisions made based on artificial intelligence. It also gives the right to explainability, depending on the situation." (Fabrício da Mota Alves) [6]

References

[1] PL 2338/2023 - Senado Federal. Disponível em: <https://www25.senado.leg.br/web/atividade/materias/-/materia/157233>.

[2] Rodrigo Pacheco apresenta projeto de lei para regular o uso da inteligência artificial no Brasil. Disponível em: <https://exame.com/brasil/rodrigo-pacheco-apresenta-projeto-de-lei-para-regular-o-uso-da-inteligencia-artificial-no-brasil/>. Acesso em: 5 jul. 2023.

[3] CANALTECH. Projeto de lei quer regulamentar inteligência artificial no Brasil. Disponível em: <https://canaltech.com.br/legislacao/projeto-de-lei-quer-regulamentar-inteligencia-artificial-no-brasil-248672/>. Acesso em: 5 jul. 2023.

[4] Projeto de lei brasileiro quer regular a inteligência artificial. Disponível em: <https://tecnoblog.net/noticias/2023/05/04/projeto-de-lei-brasileiro-quer-regular-a-inteligencia-artificial/>. Acesso em: 5 jul. 2023.

[5] Como é projeto de lei que quer regular a inteligência artificial no Brasil e o que dizem os especialistas. Disponível em: <https://epocanegocios.globo.com/tecnologia/noticia/2023/05/como-e-projeto-de-lei-que-quer-regular-a-inteligencia-artificial-no-brasil-e-o-que-dizem-os-especialistas.ghtml>. Acesso em: 5 jul. 2023.

[6] Projeto de lei brasileiro quer regular a inteligência artificial. Disponível em: <https://www.terra.com.br/byte/projeto-de-lei-brasileiro-quer-regular-a-inteligencia-artificial,f05d8a79bdef1fd58e410027e2d2204c48135nic.html>. Acesso em: 5 jul. 2023.


Image

[Img1] RODRIGUES, J. Tendências da inteligência artificial para ficar de olho. Disponível em: <https://blog.culte.com.br/tendencias-da-inteligencia-artificial-ia-para-ficar-de-olho-em-2023/>. Acesso em: 5 jul. 2023

 
 
 
