The revolution to come: the Swiss Army and Machine Learning

Peace & Security

Science & Tech

In the coming years, the Swiss Department of Defence will increasingly make use of Artificial Intelligence and its sub-discipline, Machine Learning. Some of the processes behind these technologies are still poorly understood, and some aspects remain controversial. This blog post considers several limitations of Machine Learning when used for security purposes.

Artificial Intelligence (AI) is the field of study devoted to making machines intelligent. One of its most prominent subfields is Machine Learning (ML). ML is an application, or subset, of AI that allows machines to learn from data without being explicitly programmed for a task. Instead, machine learning programs improve automatically through experience, by learning from the data they are given. ML has the potential to bring important benefits, such as increased labour productivity and improved services in fraud detection or finance automation. It is also increasingly used to inform decision makers. As such, machine learning is forecast to play a growing role in the coming years within the Federal Department of Defence, impacting international security processes and the use of military force.

Preparing for the consequences of the AI revolution is a critical task for the national security community. It is vital for this community to better understand the risks and benefits of AI and the changes it will bring. Machine learning is only as good as the data that powers it. Given that ML systems improve with large datasets, the acquisition of this data and its protection will be central to the Department’s activity in the coming years. In that regard, the Confederation will be competing with other governments and large tech companies for access to data. The Department will also have to engage with, and report to, external bodies in order to demonstrate its commitment to the ethical use of AI and its ability to address risks such as the lack of transparency brought by ML-assisted decision-making.
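To make the idea of “learning from data” concrete, the sketch below trains a toy fraud classifier on a handful of labelled transactions. The library (scikit-learn), the feature choices, and the data are purely illustrative assumptions, not a description of the Department’s systems.

```python
# Minimal sketch: a model "learns" fraud-like patterns from labelled examples
# rather than being programmed with explicit rules. scikit-learn and the toy
# data are illustrative assumptions only.
from sklearn.linear_model import LogisticRegression

# Toy training data: [transaction amount in CHF, transactions in the last hour]
X_train = [[20, 1], [35, 2], [5000, 12], [4200, 9], [15, 1], [3900, 15]]
y_train = [0, 0, 1, 1, 0, 1]  # 0 = legitimate, 1 = fraudulent

model = LogisticRegression()
model.fit(X_train, y_train)          # the "learning" step: parameters are fitted to the data

print(model.predict([[4500, 11]]))   # classify a new, unseen transaction
```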

ML in decision-making – Black box ML, Deep learning, Discriminatory bias, and Interpretability 

Although machine learning can be successfully used to inform decision-making in a variety of ways, some types of ML, notably “deep learning” techniques, do not always allow one to fully explain how a system reached its output or how such results should be interpreted. The Department of Defence will need to prioritize the use of ML systems that are understandable and interpretable, and should typically avoid using “black box” ML. Such systems are so complex that it becomes difficult, sometimes even impossible, to fully understand how a decision was reached. ‘Interpretability’ here refers to one’s ability to explain an ML system’s decision-making process and results in terms that can be understood by humans. A lack of clarity about how ML decisions are made makes it unclear whether systems are behaving fairly and reliably. Consequently, before adopting an ML system, a significant amount of research must be undertaken to ensure that the systems used are “intelligible to developers, users and regulators”.
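As an illustration of what ‘interpretable’ can mean in practice, the sketch below fits a small decision tree whose learned rules can be printed and read by a human reviewer. The library and the toy features are assumptions made for the example, not tools the Department is known to use.

```python
# Minimal sketch of 'interpretability': a small, transparent model whose decision
# rules can be read and audited by humans, in contrast to a black box.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: [document sensitivity score, access requests in the last day]
X = [[1, 2], [2, 1], [8, 15], [9, 20], [3, 3], [7, 18]]
y = [0, 0, 1, 1, 0, 1]  # 0 = routine, 1 = flag for human review

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned rules are printable and human-readable, so a reviewer can check
# exactly why a given input was flagged.
print(export_text(tree, feature_names=["sensitivity", "access_requests"]))
```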

Furthermore, the Department will need to carefully monitor the trade-offs of using ML technologies. The ‘risk of gaming’, for example, whereby a person who knows how an algorithm works can manipulate it, could cause significant security problems if ML systems are used to prevent fraud or to steer the transport of military supplies.

The Case of Facial Recognition

The use of ML systems for automated facial recognition has stirred significant debate and raised real problems. Since ML systems rely on available data to function, a lack of data for some demographic groups can make ML algorithms less accurate for those groups. For example, research has shown that if facial recognition algorithms are trained exclusively on the faces of white people, the system will perform more accurately for that specific group. Unrepresentative, inaccurate, or incomplete training data can lead to risks such as ‘algorithmic bias’, a term describing discrimination against certain groups based on an ML system’s outputs. Bias can penetrate algorithms in numerous ways. For example, if the data used to train the systems includes existing human biases reflecting historical or social inequities (even if sensitive variables such as gender or race are removed), the systems will reproduce those biases.

Consequently, the data fuelling an ML system must be scrutinized to prevent steering the system in a direction that is neither accurate nor representative of reality. The Department of Defence will have to be particularly careful with the data it uses and reflect on the biases and imbalances it may contain, and the data must be curated before it is used in ML applications. When large datasets do not exist, the Department must avoid using “synthetic data” created via computer simulations, as it may include biases and unknowns that make it particularly unfit for the defence and security sectors.
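One simple way to check for the kind of bias described above is to compare a model’s accuracy across demographic groups on a held-out test set. The sketch below assumes scikit-learn and invented group labels purely for illustration.

```python
# Minimal sketch: measuring whether a trained model performs equally well across
# demographic groups, one basic check for 'algorithmic bias'. The group labels
# and predictions are toy data, not real evaluation results.
from sklearn.metrics import accuracy_score

# Ground truth and predictions for a held-out test set, with a group label per sample
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

for g in sorted(set(groups)):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    acc = accuracy_score([y_true[i] for i in idx], [y_pred[i] for i in idx])
    print(f"group {g}: accuracy = {acc:.2f}")  # large gaps signal unrepresentative training data
```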

The Department may prioritize the use of ML systems whose characteristics, along with their performance, safety, security, and limitations, can be reported simply, easily, and accurately on basic fact sheets. Fact sheets documenting the datasets used to train ML systems could also enable developers to understand how a system is expected to perform across different population groups or locations. Such fact sheets may be key to expanding the use of ML in various applications across the security sector.
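What such a fact sheet might contain can be sketched as a small, machine-readable record; the field names below are illustrative assumptions rather than an established standard.

```python
# Minimal sketch of a machine-readable 'fact sheet' for an ML system, recording
# intended use, training-data provenance, per-group performance, and limitations.
# All field names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    name: str
    intended_use: str
    training_data: str
    accuracy_by_group: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

sheet = ModelFactSheet(
    name="gate-access-classifier-v1",
    intended_use="Flag access-badge anomalies for human review only",
    training_data="Internal access logs, anonymized before training",
    accuracy_by_group={"group A": 0.94, "group B": 0.81},
    known_limitations=["Not validated on night-time imagery", "Group B under-represented"],
)
print(sheet)
```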

Automation of Tasks and a Specialized Workforce

In the coming years, artificial intelligence systems powered by machine learning may replace some human activity in the defence sector. Their added value lies in being cheaper, faster, more scalable, and sometimes more accurate. ML systems are particularly relevant for defence-sector activities in which human error can be costly, notably data classification, anomaly detection, and optimization. Consequently, the workforce will need to become increasingly specialized in the use of these new technologies. The Department will need highly qualified people to understand, monitor, and adjust ML systems in accordance with its objectives.
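As an illustration of one such task, the sketch below uses an isolation forest (an assumed, illustrative choice from scikit-learn) to flag anomalous observations in toy network-traffic data.

```python
# Minimal sketch of anomaly detection, one of the tasks named above.
# IsolationForest and the traffic numbers are illustrative assumptions.
from sklearn.ensemble import IsolationForest

# Toy feature vectors: [megabytes transferred, login attempts in the last hour]
normal_traffic = [[5, 1], [7, 2], [6, 1], [8, 3], [5, 2], [7, 1]]
detector = IsolationForest(random_state=0).fit(normal_traffic)

# predict() returns 1 for observations the model considers normal, -1 for anomalies
print(detector.predict([[6, 2], [950, 40]]))
```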

Finally, the Department will have to contribute to establishing the essential requirements that an AI system must fulfil and the associated verification processes. Ethical and safety requirements will have to be clearly defined, and the Department will have to work with, and report to, various external bodies on its use of ML. However, to date, no clear, generally accepted guidelines for the implementation of AI systems exist. Several national and international bodies are producing industry standards for the ethical design of robots, autonomous systems, and AI. The Department’s activities will typically be influenced by such standards, for example when exploiting ML for facial recognition applications. Establishing these guidelines and definitions will require not only highly specialized technical expertise but also a broader societal and philosophical perspective on how democracies can benefit from AI.

 

Cover picture by Ali Shah Lakhani, retrieved from Unsplash.