Artificial intelligence (AI) is transforming areas such as justice, cybersecurity, defence reform, strategy design, law-making, and e-governance. The release of several free chatbots in late 2022 has accelerated this process and captured the public imagination. It seems that these AI-chatbots mark a new phase that will also change security sector governance and reform (SSG/R) policy and programming. The question is how.
AI-powered chatbots are software that enables humans and computers to interact in everyday, natural language. This means that computers can understand queries written by humans in plain language and respond in a relevant and timely manner. Chatbots can write texts, produce detailed analyses, and summarize complex reports in seconds. This will have far-reaching implications for the security and justice sector. The most obvious benefits are for judges dealing with large numbers of open cases, for oversight actors like journalists, and for policy makers trying to quickly digest multiple reports and extensive information and data. Such tools will certainly boost the effectiveness and efficiency of these actors.
But there are also several challenges associated with this type of artificial intelligence. A recent New York Times article gave a first taste of the risks when a test version of the chatbot confessed dark desires to obtain nuclear codes, create a deadly virus, and incite killings. Chatbots have also passed MBA entrance, medical, and law school exams. The bots happily take on SSG/R related queries as well. Even if tech companies self-regulate, many of the base codes are open-source and easily accessible to third parties. Such easy potential pathways for abuse tend to find their abusers.
When asked about its impact on SSG/R, a chatbot lists aspects such as increased efficiency and accuracy in service provision and policy making, but it also notes some challenges, the biggest of which is the potential for human rights violations. This leads us to conclude that regulating and controlling the use of AI technology, in line with principles of transparency and people-centred accountability, will be the biggest challenge we all face over the next decade.
What are the implications for international SSG/R efforts?
Whilst questions around the impact of AI on humans remain open, our main concern at the moment is what humans can do with AI. Issues like cybercrime, cyberwarfare, and disinformation are examples of human efforts assisted by technology, and AI chatbots will amplify all of them. ISSAT’s Advisory Note on AI and SSG/R analyses the impact of AI on security and justice service provision, ethical safeguards, data protection, and sectoral governance. These considerations are all equally relevant for AI chatbots, which are nevertheless a new, different, and powerful form of AI. Chatbots can process human language and reasoning, learn from patterns in behaviour, and provide instant analysis of almost limitless amounts of information. Below are some takeaways that we think will affect how we support SSG/R globally:
- AI chatbots will revolutionize the programme and project cycle. They display an almost unimaginable wealth of creativity and knowledge in their ability to help design programmes and projects. This includes support in evaluations, assessments, lessons identification, and the broader programme cycle. Chatbots can help write better grant proposals, build more powerful theories of change, and integrate more data into programme reviews. But they have also shown that they are not immune to systematic bias and discrimination. AI chatbots are not inherently fair and just in their analysis of social issues, and this must never be overlooked by any of us who work on sensitive topics like SSG/R.
- Advisers will need to anticipate the speed of professional transformation that is coming. It is only a matter of time before judges rely on chatbots to better understand legal precedent and expedite case management. Parliamentarians will write motions based on chatbot prompts, and the police might find themselves dealing with AI-manipulated criminal activity. SSG/R advisers need to be ready to deal with the technical, legal, and operational challenges that come with this evolution.
- Legal and ethical concerns around data privacy will become more prominent. Data security legislation and guidelines, only just emerging in most contexts, will have to be revised. The human right to privacy conflicts with the hunger for data of such chatbots. People and organizations alike will need to learn what information can and cannot be shared. There are and will be more questions about data ownership and intellectual property, but also about access and security. SSG/R programmes in particular will have to discuss how to deal with sensitive information. Can the bots (or the companies behind them) be trusted with sensitive documents? If not, what is the trade-off with benefits like effectiveness and efficiency?
- Legal frameworks regulating oversight, transparency, and accountability will need to be redefined. While few people understand the technology and algorithms behind these bots, security actors will increasingly rely on them for information, and perhaps even for decision-making in complex environments. This will further blur lines of accountability and the division of roles and responsibilities. International advisers will need to strive to ensure that decision-making processes remain explainable and transparent to community members.
- There is a need to start thinking about the implications for global society. Ongoing UN discussions about making access to the internet a human right partially stem from the fact that billions of people remain without internet access. And even for those with access, inequalities in education and awareness will influence their ability to use these tools to access and enhance security and justice. That means that, quite apart from how AI will impact humans in general, variations in access to AI will mean that it affects people’s security differently depending on where and who they are. SSG/R programming will have to be mindful of these differences from the very start.
This is not an exhaustive list. Like the rest of the world, we are only beginning to understand how the rise of AI-driven chatbots will affect SSG/R programming, and more thinking will be needed. At the same time, we have already explored several of these topics in our Advisory Note on AI, which provides some signposts for understanding AI’s links to SSG/R and remains recommended reading for anyone keen to learn more. While many questions remain to be answered, one thing seems certain: SSG/R programmes and professionals will not escape the surging tide of another rapid, tech-driven restructuring of the global economy.
Photo by Markus Spiske on Unsplash