How does AI change the game in terms of security?
The omnipresence of artificial intelligence (AI) in contemporary society has made it an essential tool in the field of security. However, the growing use of this technology raises legitimate concerns about security, transparency, and control.
Definitions of AI vary, but they converge on the idea that it is a form of computational intelligence that draws inspiration from, or imitates, human cognitive abilities in order to enhance decision-making and learning. As such, AI is present in numerous domains, including information retrieval and analysis, facial recognition, recommendation, prediction, and decision execution.
However, the proliferation of these security technologies raises questions of trust, safety, and societal impact. AI's lack of explainability creates risks, such as decisions based on erroneous data. To avoid these pitfalls, it is essential to make algorithms transparent, so that the quality of the data used and the factors weighed in decision-making can be verified.
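To make the idea of transparency concrete, here is a minimal, hypothetical sketch of an auditable scoring rule: the feature names, weights, and threshold below are invented for illustration, but the structure shows how every factor behind an automated decision can be exposed for human verification.

```python
# Hypothetical illustration: auditing a transparent linear risk score.
# The feature names, weights, and threshold are invented for this sketch.

WEIGHTS = {
    "failed_logins": 0.5,
    "new_device": 1.5,
    "unusual_hour": 1.0,
}
THRESHOLD = 2.0  # scores above this value trigger an alert

def explain_score(features):
    """Return the total score and each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = explain_score({"failed_logins": 3, "new_device": 1,
                            "unusual_hour": 0})
print(score >= THRESHOLD)  # whether the alert fires
print(why)                 # every factor behind the decision is visible
```

Because the model is a simple weighted sum, an auditor can trace any alert back to the exact inputs and weights that produced it; opaque models do not offer this guarantee.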
Furthermore, concerns about direct or indirect bias underline the need to reflect on human beings and their complexity. The criminal or accidental misuse of big data can turn algorithms into discriminatory tools that are both silent and systemic. Culturally, these issues pose social challenges related to how AI is perceived and used.
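One simple way such silent discrimination can be surfaced is a disparate-impact check on decision logs. The sketch below is a hypothetical illustration: the groups and outcomes are invented, and the 0.8 cutoff is the informal "four-fifths rule" heuristic, not a legal standard in every jurisdiction.

```python
# Hypothetical illustration: a disparate-impact check on 0/1 decision logs.
# Groups and outcomes are invented; 0.8 is a common heuristic threshold.

def selection_rate(outcomes):
    """Fraction of positive decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

ratio = disparate_impact([1, 1, 0, 1], [1, 0, 0, 1])
print(round(ratio, 2))
print(ratio < 0.8)  # a low ratio flags a potential systemic bias to investigate
```

A flagged ratio is not proof of discrimination, but it is the kind of quantitative signal that makes an algorithm's behavior auditable rather than silent.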
In the field of security, AI can be a valuable ally, with applications ranging from weak-signal detection to facial recognition and crowd management. However, it is imperative not to rely on AI alone for decision-making, and to weigh the security risks associated with its use.
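Weak-signal detection can be as simple as flagging statistical outliers for human review. The sketch below is a hypothetical illustration: the readings and the two-standard-deviation cutoff are invented, and the point is precisely that the script only flags candidates, while a human analyst decides what an anomaly actually means.

```python
import statistics

# Hypothetical illustration: flagging weak signals as statistical outliers.
# The readings and the 2-sigma cutoff are invented for this sketch.

def weak_signals(readings, sigma=2.0):
    """Return indices of readings more than `sigma` std devs from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, x in enumerate(readings)
            if abs(x - mean) > sigma * stdev]

readings = [10, 11, 9, 10, 10, 11, 30, 10]
print(weak_signals(readings))  # the outlier's index is handed to an analyst
```

Keeping the final judgment with a human operator is exactly the "AI as ally, not sole decision-maker" posture described above.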
Thus, it is crucial to implement an “empowering” approach in which humans retain control over the machine; such an approach should take precedence over a transhumanist one in which the machine replaces humans. Accounting for algorithmic bias and ensuring transparency in decision-making are also key to the responsible use of AI in the security domain.
“Artificial Intelligence and Security: Ethics at the Heart of the Challenges”
The question of ethics lies at the heart of the use of artificial intelligence in the field of security. What solutions can be considered to address these major challenges?
Ethics provides a framework to ensure that the use of AI does not undermine moral values and human relationships. It is essential to establish guidelines for the design, use, and control of AI, and to adopt an “ethics by design” approach. This approach means designing AI applications with human concerns in mind and avoiding oppressive and degrading algorithms.
To ensure controlled design throughout the production chain, a three-level conceptual analysis can also be employed. This method accounts for the impact of AI on human interactions and organizations, with the aim of avoiding excessive reliance on the technology.
The “ethics by evolution” approach is a systemic method that supports the creation, deployment, use, and monitoring of innovative algorithmic systems in a way that promotes adherence to ethical rules. It relies on participation and collaboration among the various stakeholders to anticipate and manage ethical risks.
Furthermore, collaboration between the public and private sectors is crucial to establishing the standards and regulations needed to deploy AI in society and to ensure the security of all. Stakeholders in the security domain must work together to create a robust ethical framework for its use.
In conclusion, the use of AI in the field of security must adhere to strong ethical principles, conceived as a continuum among developers, governments, and businesses. Implementing approaches such as “ethics by design” and “ethics by evolution”, and fostering collaboration among security stakeholders, are effective ways to address the ethical challenges posed by AI. Ensuring the responsible use of this technology is crucial to guaranteeing security and the protection of fundamental rights for all.
Fabrice Lollia, PhD in Information and Communication Sciences.