
Grading Rubrics
Download the Grading Rubrics here






Problem Statements
Submissions close on Saturday, 13 September 2130H!

About the Sponsor
Ministry of Law (MinLaw) – Legal Technology Transformation Office (LTTO) is a division within the Legal Industry Group that develops strategies and implements initiatives to strengthen Singapore’s status as a global legal services hub through technology. LTTO coordinates and drives legaltech efforts across the legal services sector, promoting innovation and responsible adoption of emerging technologies such as Generative AI.
Background
Generative AI (GenAI) tools are becoming increasingly accessible and sophisticated. Legal professionals are leveraging these tools to augment their work (e.g. conducting legal research, building case chronologies, drafting documents). At the same time, non-legal professionals, including business owners and members of the public, are turning to GenAI for assistance with legal questions, processes, and even drafting their own legal documents.
While these tools offer speed, convenience, and efficiency, they also present significant risks, particularly in legal contexts. Common risks include hallucinations; outdated or inaccurate legislation, case law, or legal principles; and the misinterpretation of complex legal concepts that require professional judgment.
Because GenAI can produce responses that appear fluent and authoritative even when they are inaccurate or incomplete, users may be misled into believing the output is correct. This creates a risk of misinformation and over-reliance on GenAI output, which may lead users to misunderstand their legal rights or take the wrong course of action, potentially resulting in costly mistakes, legal exposure, or adverse outcomes.
Users must understand the limitations of GenAI and their responsibilities when using these tools:
- They are using an AI tool, not receiving services from a legal professional.
- Regulatory protections prohibit non-legal professionals from using AI tools to provide legal services to third parties.
- AI outputs are based on model design and training data, not professional legal judgment.
- Users accept the risks when choosing to rely on AI instead of seeking qualified legal advice.
Challenge
Create an interactive solution that allows users to safely experiment with GenAI for legal tasks while learning and practicing responsible use through a “learn as you use” approach. The solution should meet users where they are in their AI journey and provide hands-on learning experiences covering both the capabilities and limitations of AI in legal contexts, reinforcing user accountability.
The solution should minimally:
- Provide a safe environment for users to explore AI-assisted legal tasks
- Offer real-time coaching and feedback as users interact with AI tools, such as contextual nudges, warnings, and explanations that teach responsible use and highlight legal risks (e.g. “Verify authority”, “Possible client confidential information detected – consider redaction”)
- Educate users about AI limitations and their responsibilities in an interactive, user-friendly way
- Communicate user responsibilities clearly, including disclaimers such as “This is not legal advice” and reminders to seek professional advice
- Incorporate basic safeguards (such as those suggested in IMDA’s AI Governance Framework) to protect sensitive information and reduce the risk of misleading or unsupported outputs
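The confidentiality safeguard could be sketched as a simple pre-transmission filter that scans a draft prompt for identifier patterns and substitutes placeholders before anything reaches a model. This is a minimal illustration under stated assumptions only: the `redact` function, the pattern list, and the placeholder format are hypothetical, and a production tool would need far broader detection (names, addresses, case numbers, ideally an NER model) plus user confirmation and audit logging.

```python
import re

# Illustrative patterns only; real coverage must be far broader.
PATTERNS = {
    "NRIC/FIN": re.compile(r"\b[STFGM]\d{7}[A-Z]\b"),  # simplified SG ID format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SG phone": re.compile(r"\b[689]\d{7}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace detected identifiers with placeholders; return what was found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

redacted, found = redact("My client Tan (S1234567A, tan@example.com) was sued.")
# The calling tool could block transmission until the user confirms the redactions.
```

A browser-extension version of this check would run the same logic client-side, so sensitive text never leaves the user's machine before confirmation.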
Possible Features
- Browser extension or plugin which analyses user prompts on AI platforms in real time, flags issues, and suggests better-structured prompts with contextual educational tips.
- Browser extension or plugin which analyses AI-generated legal content and provides users with comprehensive risk ratings and explanations. The tool automatically assesses AI outputs on legal platforms, flagging potential issues including factual inaccuracies, outdated legal references, or overly definitive statements, with colour-coded risk ratings (e.g. potential for hallucination) and clear guidance on next steps, such as fact-checking requirements or when to consult a qualified lawyer.
- Redaction tool that detects potentially confidential information (such as personally identifiable data) before text is sent to a model, suggests or even automates redactions, blocks transmission until user confirmation, and maintains an audit log.
- A web or mobile application that provides structured learning modules (e.g. interactive tutorials) where users practice common legal AI tasks using pre-built scenarios and sample documents. Users work through guided exercises such as drafting simple legal letters, researching basic legal concepts, or reviewing contract clauses, with the app providing real-time feedback on prompt quality and flagging potentially problematic approaches.
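The real-time prompt coaching described above could likewise be sketched as a small rule engine that inspects a draft prompt and returns educational nudges before it is submitted. The rules, messages, and `coach` function below are illustrative assumptions, not a prescribed design; a real tool would combine such heuristics with model-based checks.

```python
# Each rule pairs a predicate on the draft prompt with an educational nudge.
# All rule text here is a hypothetical example, not official guidance.
NUDGE_RULES = [
    (lambda p: "singapore" not in p.lower(),
     "No jurisdiction stated - name one (e.g. Singapore) so the answer "
     "cites the right law."),
    (lambda p: any(w in p.lower() for w in ("guarantee", "definitely", "will i win")),
     "GenAI cannot predict case outcomes - ask for relevant considerations instead."),
    (lambda p: len(p.split()) < 8,
     "Very short prompt - add key facts, dates, and the document type you need."),
]

def coach(prompt: str) -> list[str]:
    """Return the educational nudges triggered by a draft prompt."""
    return [msg for check, msg in NUDGE_RULES if check(prompt)]

tips = coach("Will I definitely win my case?")  # triggers all three nudges
```

A browser extension could surface these nudges inline as the user types, turning each correction into a small lesson in responsible use.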
Track Prize
- $50 Grab voucher per participant for the winning team.
- Opportunity to present the winning prototype to Ministry of Law, including senior management.