Could machines deliver fair sentencing in the courtroom of 2035? That was the question explored by innovative Australian law firm Lander & Rogers during an opening-day session at SXSW Sydney.
- Through a fictional embezzlement case set in 2035, the firm examined the ethics of AI-led justice in a world where humans and machines share decision-making power.
- Judge Nell Skinner, Oxford-trained ethicist Dr. Peter Collins, and Lander & Rogers partner Matt McMillan debated an AI sentencing decision, sparking lively discussion on the balance between technology and human judgement.
- The session, led by Courtney Blackman of Lander & Rogers, combined interactive storytelling, real-time audience voting, and deep ethical and legal inquiry, delivering a thought-provoking glimpse into the justice system of tomorrow.
Lander & Rogers returned to SXSW Sydney with a provocative legal experiment asking whether artificial intelligence should act as judge, jury, and executioner. Building on last year’s popular ‘AI versus lawyer’ courtroom battle, the firm’s AI Lab staged a futuristic fraud case set in 2035, sparking debate on the ethics of automated justice.
SXSW Sydney was abuzz with anticipation as a crowd gathered for one of the festival’s most consequential sessions: ‘AI as judge, jury, and executioner: the ethics of automated sentencing’. The event delivered a powerful blend of interactive storytelling, live audience participation, and deep ethical exploration, challenging attendees to imagine the courtroom in ten years’ time.
Immersive storytelling meets legal innovation
Led by Courtney Blackman from Lander & Rogers’ innovation team and AI Lab, the session invited the audience to step into a fictional future where an AI judge presides over a high-stakes embezzlement case. Drawing on digital evidence, sentiment analysis, and risk algorithms, the AI delivered a sentencing decision that sparked a lively debate among the panel and the audience alike.
Attendees, ranging from legal professionals and technologists to students, AI enthusiasts and policymakers, were more than spectators. Through real-time voting, they weighed in on whether AI should make critical judicial decisions, grappling with the tension between empathy and efficiency, and between human judgment and machine logic.
AI, grounded in ethics, could transform the justice system
The panel brought together three of Australia’s most respected voices in law and ethics.
Judge Nell Skinner, president of the Children’s Court of NSW, offered a judicial perspective on the evolving role of human judges in an era of automation. She was joined by Dr Peter Collins, an Oxford-trained ethicist, who explored the moral complexities of AI-led sentencing and the challenge of embedding human values into machine logic. Matt McMillan, a technology lawyer and partner at Australian law firm Lander & Rogers, examined the legal risks and safeguards needed to protect the integrity of the justice system in an AI-driven future.
When asked if it was ethically acceptable to delegate life-impacting decisions to machines and whether AI could make more consistent and less biased decisions than humans, Dr Collins said: “As an example, I would use AI, if I were a doctor, to improve clinical care. But I would always leave the decision making to the doctor and the nurses. We do it [use AI] in education, we do it in all sorts of fields. It’s not exceptional. I think the thing that AI is brilliant at is sweeping up data.
“We have to be very thoughtful. We have to work through this in a slower and thorough way. We have to work out where the reliable data is and also whether the data has integrity. Unfortunately, when it comes to crime statistics, data has been falsified.
“Ultimately, AI is actually more ethical than human beings because it’s not necessarily swayed by bias. The limitation of our decision making is the limitation of our cognitive function. We’re hardwired for stereotypes.”
On whether judicial officers—human or machine—should reflect community expectations in their decisions, Dr Collins rhetorically questioned: “Who sets the community expectations? Social media is brilliant in sweeping up community perceptions of safety and turning it into a ‘lock them up’ mentality.”
He went on to say: “When we’re afraid as human beings, we revert to type; there’s a rush to judgment. That’s called heuristics in behavioural ethics. This is the easy way for us to interpret community.
“But judges and the wider community need to weigh up and think about what community expectations are; what’s really in line with community expectations and where they are subject to being diverted into rushing to judgment or making quick decisions.”
AI as a GPS system
Mr McMillan talked about how a machine making a mistake in a future matter would be dealt with and what a future appeal process might look like: “We’ve all used ChatGPT at some stage, got a response, and thought that’s not quite right. But when you translate that to a courtroom setting and the AI hallucinates, or it makes a mistake, it misreads a witness statement, or it makes a recommendation for too harsh a sentence in the circumstances, then we’re starting to play with someone’s liberty. And that raises the questions: who should be responsible for that? Is that the coder? Is it the judge that relied on it? Is it the agency that’s rolled out the AI solution?
“When you look at our traditional liability frameworks, they often assume that there’s a human that you can point to. There are concepts of duties and standards of care and intent that you can trace back to. But AI muddies that because what if it is bias in the data that the system’s been trained on? What if it is a glitch with the algorithm? What if it is an edge case which hasn’t actually been thought of by the parties?
“And if we hold developers liable, then that can start to stifle or chill AI innovation in the justice system and all the wonderful things that AI can actually do for the justice system in terms of the efficiencies. On the other hand, if we’re shielding the developers, then it can really impact people’s recourse in those circumstances. I think a middle path might be to look at AI tools in a courtroom setting as high risk, and to put them through rigorous certifications and ongoing audits so that if a mistake does arise, you can pinpoint the error and trace back through to where the failure has occurred in the process.”
On the appeals point, Mr McMillan said: “You could foresee a situation where a person asks for a new hearing, but also asks for a code review alongside that. In those circumstances it’s not just reviewing the same facts, but it’s looking at the AI system itself. What were the inputs into the AI system? What were the guardrails? And what are the outputs and the confidence levels around those outputs?
“At the end of the day, machines aren’t there to replace judges. I look at AI for judges a bit like the GPS system. It can provide you with helpful directions, but the driver remains responsible.”
Human rights are part of the code
Whether a machine can be coded for justice that involves nuanced interpretation and discretionary judgement was also debated, with Mr McMillan stating that “it is incredibly hard to encode because it’s not just about logic. It goes to the point of legal fidelity – having the code be faithful, not just to the letter of the law, but to what the law is trying to achieve, its purpose and its context.”
Dr Collins added: “We’ve already got some codes which AI could integrate with: they’re called human rights. You go back to the French Revolution; you go back to the 13th century and we ethicists always claim these traditions first. Every state government and territory government in this country has a human rights test on its legislation… the human rights test is important. It’s a huge part of our community sense of who we are and what can lead to human flourishing and to justice for all.”
A community affair
Because the justice system touches everyone, the discussion extended beyond the panel to include the audience. The interactive format encouraged attendees to reflect on their own beliefs about justice, technology, and the future of the legal profession: specifically, whether intelligent machines should make judicial decisions, whether judges should remain exclusively human, or whether they would be comfortable with a hybrid system in which humans and artificial intelligence work together. By the end of the session, a clear majority of participants agreed that a collaborative system—combining human judgement with AI capability—would deliver the best outcomes for future generations.
One audience member remarked: “We’re lucky to live in a country where—for the most part—people trust lawyers, judges and juries. In other societies, a criminal might prefer an AI judge over a human one.”
Another offered thoughts on humanity as a whole: “I don’t think humanity only belongs to humans anymore. AI can learn and be fed what humanity is about.”
Bias was a word repeated throughout the conversation, and another audience member commented: “One of the risks of working with AI increasingly is that humans fall prey to automation bias. Even in the 19th century, and as well today with AI, people defer to models’ judgments, grow more complacent alongside tech, and speed to judgement when collaborating with AI.”
And there were thoughts on race and privilege, including: “I’m not sure how you can argue that AI is better than humans when it comes to decision-making because humans are biased. AI is even worse because it amplifies and perpetuates these stereotypes at a greater speed and scale compared to humans. As a person of colour, I can’t afford to have an algorithm make a decision for me. See COMPAS software scandal, Amazon secret hiring tool, Microsoft Tay…”, while another audience member commented that when “rich, financially privileged humans invest in developing AI, there may be a risk of creating AI that [continues] to privilege rich people.”
Future justice
The session surfaced a range of critical questions for the future of justice:
- Who should hold ultimate authority in sentencing: humans or machines?
- Would offenders, victims or the public trust machines to make sentencing decisions?
- Can AI truly understand human behaviour and context?
Courtney Blackman, who developed and produced the session, celebrated the event’s impact and the audience’s engagement: “Last year at SXSW Sydney, we asked whether AI could win a court case. This year, we pushed the boundaries further by imagining a future where AI presides over the courtroom itself. Bringing together a sitting judge, a globally respected ethicist, and one of Australia’s leading technology lawyers, we sparked a conversation that was as bold as it was necessary. The engagement from the SXSW Sydney audience was exceptional—not just curious, but deeply thoughtful. It was a defining moment for legal innovation championed by Australian law firm Lander & Rogers, where the public, not just legal professionals, were invited to rethink the role of technology in justice. I hope every attendee left feeling not only more informed but challenged to consider their place in a rapidly changing future.”
The legal voice of SXSW Sydney
The ‘AI as judge, jury, and executioner: the ethics of automated sentencing’ session builds on the success of 2024’s session ‘Can AI win a court case?’—a media sensation that was featured internationally and adopted into the University of Technology Sydney’s legal technology curriculum. This year’s event reinforced Lander & Rogers’ prominence within Australia’s legal innovation scene and its commitment to exploring how emerging technologies will shape the future of law.
‘AI as judge, jury, and executioner: the ethics of automated sentencing’ was one of SXSW Sydney 2025’s featured sessions and took place on opening day, Monday 13 October, 12.30pm – 1.30pm, at ICC Sydney (International Convention & Exhibition Centre) in Sydney, Australia.