Beyond the hype: what Malawi’s justice system tells us about AI and access to justice

Drawing on discovery research conducted in Malawi, this insight explores how artificial intelligence could support, but not replace, legal aid systems under severe pressure. It reflects on what AI can realistically achieve in low-resource justice systems, and the conditions required for technology to reduce backlogs without undermining fairness, trust, or access to justice.

Justice innovation starts with people, not technology.

Justice sector stakeholders gathered around a table during a research workshop in Malawi, discussing the use of digital tools to support access to justice.

Across many justice systems, artificial intelligence is increasingly presented as a solution to delay, backlog, and limited capacity. Nowhere are these pressures more visible than in low-resource settings, where legal aid services are stretched thin and access to justice remains out of reach for many. Malawi offers a powerful case study of both the promise and the limits of AI in this context.

This insight draws on discovery research I led in Malawi, working with legal aid lawyers, judicial officers, law students, and civil society organisations to explore how artificial intelligence could support access to justice in a severely resource-constrained system. The research examined the drivers of legal aid backlogs, the appetite for AI-enabled tools, and the conditions under which such technologies might be both effective and appropriate.

Malawi’s legal aid system faces an acute capacity challenge. Tens of thousands of active cases are managed by a small number of lawyers, while courts struggle with staffing shortages, paper-based processes, and limited digital infrastructure. Delays are not simply administrative inconveniences; they have real human consequences, including prolonged pre-trial detention and reduced trust in justice institutions.

Against this backdrop, stakeholders were clear that technology alone cannot “fix” the justice system. Chronic underfunding, human resource shortages, and structural inefficiencies remain fundamental constraints. However, the research also highlighted several areas where AI could play a meaningful supporting role if carefully designed and governed.

Promising use cases included AI-assisted legal research to reduce time spent searching for precedents, document automation to ease administrative burdens, public-facing chatbots to improve access to basic legal information, and case triage tools to help prioritise urgent matters. Importantly, these tools were consistently framed by participants as decision-support mechanisms — not replacements for legal professionals or judicial discretion.

A recurring theme across the research was the importance of context. AI tools must be grounded in local law, languages, and workflows to be trusted and used effectively. Generic, externally developed systems were viewed with caution, particularly where they lacked transparency or failed to reflect local legal realities. Human oversight, clear escalation pathways, and strong data governance were seen as essential safeguards.

The Malawi case also illustrates a broader lesson for justice innovation: technology must be designed around how people actually experience justice. Legal aid lawyers juggling hundreds of cases, students supporting clinics, and communities seeking basic information all interact with the system in different ways. AI tools that ignore these lived realities risk reinforcing existing inequalities rather than reducing them.

What emerges from Malawi is not a story of technological quick wins, but of careful, staged innovation. AI has the potential to extend capacity and improve efficiency, but only where it is embedded within wider justice reform efforts, supported by investment in people, infrastructure, and governance.

Smart Justice – Key takeaways

  • AI can support access to justice, but it cannot compensate for underfunded systems.
    Technology must complement, not replace, investment in people and institutions.
  • Localisation, ethics, and human oversight are non-negotiable.
    Trust in justice technology depends on accuracy, transparency, and alignment with local legal realities.
  • User-centred design is critical in low-resource settings.
    AI tools must reflect how justice is experienced in practice, not idealised assumptions about users or systems.

Together, these principles shape Smart Justice’s approach to research, design, and delivery — grounding our work in real justice systems and focusing on how technology can reduce barriers rather than reinforce them.

Further reading: AI legal assistant: accelerating access to justice (Frontier Tech Hub); AI Legal Assist Discovery Report (PDF).