Stop guessing at AI ethics: Use the code you already know

By Francois du Toit, Founder at PROpulsion
6 November 2025 • 6 min read

Have you or someone on your team ever pasted client information into ChatGPT? This is happening in advisory firms right now. Someone needs a quick summary of a client meeting, they copy the notes into a free AI tool, and 30 seconds later, they have clean bullet points. The work feels efficient, the risk feels distant, but the breach is real. You do not need to wait for new regulation before managing this. You already have a framework that works: the FPI Code of Ethics.

Map the risks to principles you already know

AI introduces real risks: bias in algorithms, data breaches, lack of explainability, and the temptation to defer judgment to a machine. These are not new ethical problems. They are new versions of problems the FPI Code already addresses.

The Code gives you nine principles and each one translates directly to AI use.

  • Clients First means AI must serve the client’s best interest, not just speed up your workflow. Using AI to generate multiple scenarios for discussion helps the client. Using AI to speed up advice without checking whether the output suits the client does not.
  • Integrity requires you to stand behind the output. If you cannot explain why the AI suggested something, you cannot use it. You must disclose AI involvement and choose tools you can audit.
  • Objectivity means AI is a second set of eyes, not a replacement for your judgment. Accepting the first answer without questioning it hands over the decision to the machine.
  • Fairness demands that you test tools with different client profiles before you trust them. Run the tool across varied cases and confirm the outputs do not disadvantage any group of clients.
  • Confidentiality is the area most firms struggle with. Public AI models are not secure. Pasting client names and financial details into free ChatGPT is a POPIA breach waiting to happen. Use local AI tools or enterprise solutions with data protection agreements instead.
  • Diligence requires review and validation before use. Copy-pasting AI output directly into a Record of Advice is not diligence. Review it. Fact-check it. And document your verification process.
  • Professionalism means evaluating tools against clear criteria aligned with your values; not adopting the latest tool because your competitors are.
  • Competence means understanding what you are using. If you do not know how the tool works or its limitations, you are not yet competent to use it. Train on the tool’s capabilities, test it thoroughly, and know when not to use it.
  • Accountability is the principle that closes the loop. You take full responsibility for AI outputs as if they were your own work. Document how AI was used, verify all outputs, and be prepared to justify decisions to clients and regulators.

POPIA Section 71 restricts decisions that significantly affect a person when they are based solely on automated processing, so human review is required. The Code gives you the ethical framework, while the law gives you the legal floor.

Choose tools that match your ethics

Not all AI tools are equal. Before you adopt a tool, ask four questions.

  • Is it POPIA compliant?
  • Does it store data locally or in South Africa?
  • Does it have a data protection agreement that prevents reuse of client data?
  • Can it produce explainable outputs that you can audit and justify?

If the answer to any of these is no, walk away.

Enterprise and business versions of tools like ChatGPT, Claude, or Microsoft Copilot often have better data policies than free versions. Local AI tools that run on your own servers give you full control. Whatever tool you use, insist on exportable data and open APIs so you are not locked in.

Build a crawl-walk-run plan for your firm

Start small. Stay in control. Scale responsibly.

  • Crawl means internal use with low risk. Use AI for transcriptions, drafts, and note summaries where no client data is involved.
  • Walk means client-facing use, but always supervised. Generate risk profiles or meeting summaries, but a human must approve every output before it reaches the client.
  • Run means high-impact use like strategy modelling or advice generation. Only move to this level once you have policies, training, and oversight in place. Assign someone in your firm to own AI governance and monitor tools regularly.

Write a simple AI use policy by answering four questions: What tools are approved? What uses are prohibited? Who is responsible for oversight? What is the plan if something goes wrong?

Then train your team on the policy, and review it every six months as the tools evolve.

Ethics is your edge

Using AI ethically is about building trust. Clients will ask how you use AI, and regulators will ask how you govern it. Your answer will shape both relationships. Start this week by auditing your current AI use and drafting a one-page AI policy. Or pick one low-risk use case, run a pilot, and document what you learn. The smallest viable next step is the one that protects your clients and keeps your firm in control.

Stay curious!


 