Last April, Derechos Digitales publicly presented the initiative Artificial Intelligence and Inclusion in Latin America. The project includes studies by specialists in four countries of the region on the public sector's implementation of technological systems with automation components.
To date, the characteristics of this type of technological deployment in the region have been scarcely documented. Meanwhile, outpacing any analysis, the implementation of digital technologies continues to advance silently, with little room for debate and citizen participation. This is evidenced by the cases analyzed in Brazil, Chile, Colombia and Uruguay.
It is worrying to observe how, rather than first addressing the challenges of digitization and establishing adequate data governance procedures, governments strive to automate processes through technological systems that add new layers of complexity and opacity. This is all the more concerning when such systems can affect fundamental rights in areas such as social assistance, employment, justice or health, or culminate in discriminatory interventions in people's lives by the State.
A new fetish
Among the cases analyzed, one of the first issues that draws attention is the adoption of technologies that, prior to their deployment, lacked any evidence of their need or usefulness. In other words, "solutions" are developed and implemented for poorly described problems, since in general there is no prior diagnosis to justify them.
This lack of diagnosis is critical in three respects: 1) the purpose the specific technological solution serves (including whether it actually fits the problem), 2) the administrative apparatus's readiness for its implementation, and 3) citizens' capacity to understand the effects of the technological solution. Consequently, the implementation of complex technological systems is in itself problematic in a context insufficiently prepared for their deployment.
In the four countries analyzed in the study, the way the different initiatives are implemented reveals considerable discretion and a lack of planning in the use of technologies at the government level. This condition, incidentally, reflects a regional trend that extends beyond the four countries initially considered in our research.
The common feature seems to be an approach of "technological solutionism": the expectation that digital technology will be the solution to complex political and social problems. In this sense, the enormous global promotion of artificial intelligence technologies makes them an object of desire for public servants willing to put citizens at risk to project the image of a strong and modern State, regardless of its real "intelligent" capabilities.
The foundations of the system
Although innovation at the state level is welcome and can be useful for dealing with public management problems, it is unacceptable that initiatives that may involve access to and processing of immense amounts of personal data proceed without prior discussion and without a proper legal framework. Despite the absence of a specific debate on the implementation of automated systems, the regulatory frameworks for data protection in the countries analyzed remain very uneven.
As is known, the personal data protection framework is too limited to regulate artificial intelligence systems, since it addresses only one of their facets: the collection and processing of information about people. Nevertheless, it is in that area of law where, due to its connection with new and intense forms of processing, regulatory advances can be observed, such as specific guarantees to favor the "explainability" of automated decisions or to integrate human review of those decisions. The Brazilian attempt to incorporate both aspects into its regulations, halted by a presidential veto, reveals a significant lack of political will to put the interests of citizens above the imperatives of technology industries, and this is surely not exclusive to one country.
At the same time, the existence of data protection regulations in the four countries analyzed has not ensured that the principles and rules of this field are at the center of state attention. On the contrary, a common feature is variation in the legal basis for data processing: consent is sometimes a relevant factor, but often different databases and diffuse legal authorizations are used to incorporate information from various sources. In other words, the public apparatus extends its reach on the basis of its own legitimacy, without taking due precautions to obtain the consent of the people who become objects of state intervention.
Reasons like these make explicit the demand not only for regulation to control the processing of personal data, but also for a dedicated, specialized public authority with the power to intervene, where necessary, in the actions of other state agencies.
But as we have said, the regulation of personal data is only one facet. An important part of state action centers on the promotion of artificial intelligence as a path to national industrial development. This, however, is not enough without a complete vision of the different aspects that converge in this problem. If, on the one hand, some governments seek to promote their general AI principles or strategies, on the other, there is very little specificity about how those documents will guide the public sector's implementation of technologies; for example, how much weight they will give to fundamental rights and transparency.
Unlimited lack of transparency
Perhaps one of the most relevant problems we observe in the state acquisition of technologies is the absence or insufficiency of mechanisms for directly integrating citizens' interests. In other words: the lack of joint analysis with relevant actors prior to each deployment, the absence of control mechanisms shared across multisectoral interests, and the lack of subsequent evaluation mechanisms.
The shortcomings are multiple: in addition to the lack of transparency traditionally associated with automated decision-making systems, there is the exclusion of public participation from the deployment of these systems and the absence of open, transparent mechanisms for evaluating whether they achieve their goals.
The problems deriving from this compounded lack of transparency are likewise numerous, including the absence of diverse perspectives on possible impacts. Public decision-making based solely on machines is an extreme form of technocracy, one that can erode the legitimacy of the state apparatus and calls into question the real connection with the problems these systems are supposedly intended to address.
Towards a governance agenda from the governed
From the collection of empirical data on the implementation of automated or semi-automated systems, it is possible to identify common factors that can inform, grounded in the Inter-American Human Rights System, an urgent discussion on the development of regional standards for their use. These standards should guide the production of regulatory frameworks that make the public implementation of technologies an expression of citizens' aspirations for the State, and not of the aspirations of an empty modernity.
But this is only an initial step, and regulatory development is far from responding to the immense challenges that arise. Derechos Digitales will continue working to generate evidence and to explore new spaces for dialogue on the future development of technology in Latin America.
Review the study “Artificial Intelligence and inclusion” here.
This article was written by Jamila Venturini, J. Carlos Lara and Patricio Velasco for Derechos Digitales and is licensed under CC BY-SA 3.0 CL for free use and adaptation.