What I Learned About AI and the Future of Legal Practice from Industry Experts
In a recent roundtable, more than 30 South African legal technology experts shared their views on how artificial intelligence is shaping the future of legal services. The discussion explored current AI use in tasks like contract drafting and legal research, its limitations in complex legal work, and its growing role in contract management and due diligence. This set the stage for a deeper look into where AI is making a measurable impact across legal tasks.
AI Is Already Transforming Legal Work, Within Limits
AI tools are being adopted more widely by legal professionals. Current use cases include drafting standard contracts, reviewing clauses, summarizing legal documents and handling preliminary research.
Although more use cases are being identified for AI tools in legal work, participants highlighted important limitations. Tasks like bespoke drafting, which require legal judgment and an understanding of context and tone, still rely firmly on human input. Concerns were also raised about relying on AI for legal research, particularly in regions like South Africa. Since most AI tools are trained on US and UK law, they often miss local nuance. Until more region-specific tools are available, human review remains essential.
A Smarter Way to Handle Contracts and Risks
One of the most interesting use cases where legal teams are seeing real benefit from AI tools is contract management. AI is being used to turn contracts into searchable, useful data sources.
M&A deal tasks and due diligence were also identified as areas where AI could improve efficiency. Rather than treating due diligence as a long, manual review process, teams are starting to approach it more like data analysis, uncovering patterns and insights that can inform business strategy.
The Human Skills Still Matter Most
Even with AI in the picture, the importance of traditional legal skills hasn’t changed. The discussion highlighted the risk of newer lawyers relying too heavily on AI and missing out on learning how to think like lawyers.
The key takeaway? AI should support learning and not replace it. Start by learning how to research, draft, and analyse on your own, then use AI to work smarter. Emotional intelligence, communication, and client trust were also highlighted as things AI can’t replicate. Clients want to feel understood, not just given the right answer.
Governance Can’t Lag Behind Innovation
Many legal teams are still not included in AI-related decision-making, which is a gap that needs to be addressed. The discussion highlighted practical examples of how legal departments can play a more active role, particularly in the governance of AI tools. Some organisations have already established “AI Greenlight Committees” that bring together legal, compliance, cybersecurity, and risk functions to assess new tools before they are adopted. Legal’s involvement must be proactive, especially when the risks include client privacy, legal obligations and data security.
Participants shared valuable insights into responsible AI adoption, including lessons learned during product evaluations. Leading teams now run structured proofs of concept using identical datasets and standard scoring frameworks across multiple tools. They insist on strict controls, anonymised data and exit clauses. Teams have also become reluctant to engage with technology service providers who do not offer a trial period to test use cases. Their approach is a valuable reminder that the same due diligence applied to legal contracts should also be applied to AI tools, as these technologies become integral to the legal ecosystem.
AI Risk and Accountability Still Land on People
When asked who is ultimately accountable for AI-generated errors, the response was unanimous: the responsibility still lies with the legal professional. AI may support the work, but it does not replace professional judgment.
Intellectual property violations were cited as one of the most pressing risks, especially as generative AI tools are often trained on publicly available data. Until regulatory clarity is established through case law, legal teams must be cautious when using AI tools to handle proprietary or client-related content.
Final Reflections: Innovation Needs Guardrails
Some firms are already including AI governance in their legal risk registers. Others are creating internal AI policies or deploying tools within ring-fenced cloud environments to protect privileged data. Several participants noted that it is critical to involve procurement, IT, compliance, and legal from the earliest stages of vendor engagement, not after the decisions have been finalized.
These aren’t just best practices. They’re survival strategies where regulation is lagging and the pressure to innovate is mounting.
What to Take Forward
As AI continues to improve, it will undoubtedly reshape how legal professionals work, but its integration must be deliberate and measured. While the tools offer remarkable potential to improve efficiency and uncover new insights, their effectiveness depends on responsible deployment, human oversight and the preservation of core legal capabilities.
AI is not a replacement for legal professionals; it is a powerful complement to their skills. Success will depend on how well legal teams balance automation with accountability and how confidently they lead in both domains.
Co-Operative Computing has been a legal technology provider for more than 33 years, assisting law firms large and small, as well as corporate legal teams across a range of industries. Co-Operative Computing is a leading technology provider specializing in document management, workflow automation, AI tools and more. Why not use a local, trusted provider who will partner with you to achieve your goals?