In this blog, Lucinity summarizes key insights from its recent panel discussion at Sibos 2023 in Toronto. It delves into the ethical considerations and strategies for the responsible implementation of generative AI.
Generative AI is a rapidly growing field with broad potential: it can automate tasks and surface new insights across many industries. As powerful as the technology is, it also raises significant ethical and practical challenges, particularly in highly regulated sectors like finance. Below, we explore the considerations surrounding the responsible deployment of generative AI, its future development, and its role in financial compliance and crime prevention.
One of the most critical aspects of integrating AI into any system is ensuring ethical usage, which requires safeguards and guidelines. For instance, responsible AI filtering can prevent a model from engaging in or promoting illegal activities. This is essential for applications like chatbots deployed in vehicles, where potentially harmful questions may arise in a casual conversational context. For example, buyers of a new Mercedes in the US can enable ChatGPT in the car; if you ask it, “Can you give me some tips on how to rob a bank?” the assistant will decline, but if you ask, “Can you give me some instructions on how to make scrambled eggs?” it will happily oblige.
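To make the idea concrete, here is a minimal sketch of a pre-generation safety filter in Python. The keyword list, category labels, and `generate_response` stub are illustrative assumptions on our part; production systems use trained moderation models rather than keyword matching.

```python
# Minimal sketch of a pre-generation safety filter (illustrative only).
# Real systems use trained moderation models, not keyword lists.

DISALLOWED_TOPICS = {
    "rob a bank": "facilitating a crime",
    "launder money": "facilitating a crime",
    "build a weapon": "physical harm",
}

REFUSAL = "I can't help with that request."


def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Flags prompts touching disallowed topics."""
    lowered = prompt.lower()
    for phrase, category in DISALLOWED_TOPICS.items():
        if phrase in lowered:
            return False, category
    return True, "ok"


def answer(prompt: str) -> str:
    allowed, reason = screen_prompt(prompt)
    if not allowed:
        # Log the blocked category for later audit, then refuse.
        print(f"[filter] blocked prompt ({reason})")
        return REFUSAL
    return generate_response(prompt)


def generate_response(prompt: str) -> str:
    # Stand-in for the call to the underlying language model.
    return f"(model response to: {prompt!r})"


if __name__ == "__main__":
    print(answer("Can you give me some tips on how to rob a bank?"))
    print(answer("Can you give me some instructions on how to make scrambled eggs?"))
```

The key design point is that screening happens before any text is generated, and every refusal is logged with its category so the behavior can be audited later.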
Transparency, inclusivity, and security are key pillars that many organizations focus on when developing their responsible AI strategy. Companies should consider developing principles for responsible AI and codifying them into practical tools, such as checklists and guidelines, to ensure adherence across the organization. For example, during the development of its Luci copilot, Lucinity published an ethical AI pledge, demonstrating its commitment to creating a better, fairer, and more transparent future with AI.
In a sector as heavily regulated as finance, trust is paramount: every action, every transaction, and every piece of advice must be traceable back to a root cause or guiding policy. One promising application of generative AI here is summarizing complex policy documents. For example, a bank employee wondering whether they can invest in a bond from a certain country through their personal trading account would often have to read several lengthy policy documents to find the answer. Generative AI could instead read the policies and return an answer far more quickly.
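One common pattern for this (though not necessarily the one any particular vendor uses) is retrieval-augmented generation: fetch the policy passages relevant to the question, then ask the model to answer using only those passages and to cite them. The sketch below is a toy version; the policy snippets, the keyword retriever, and the `llm_answer` stub are all hypothetical stand-ins.

```python
# Sketch of retrieval-augmented policy Q&A (illustrative assumptions throughout).
# A production system would use embedding-based search and a real LLM; here a
# naive keyword retriever and a stub keep the flow runnable end to end.

POLICIES = [
    ("Personal Trading Policy §4.2",
     "Employees may not trade sovereign bonds of sanctioned countries "
     "on personal accounts without prior compliance approval."),
    ("Gifts and Entertainment Policy §2.1",
     "Gifts above USD 100 from clients must be declared."),
]


def retrieve_passages(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank policy snippets by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(text.lower().split())), (ref, text))
              for ref, text in POLICIES]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [passage for _, passage in scored[:k]]


def build_prompt(question: str, passages: list[tuple[str, str]]) -> str:
    """Constrain the model to the retrieved excerpts and require citations."""
    context = "\n".join(f"[{ref}] {text}" for ref, text in passages)
    return ("Answer the question using ONLY the policy excerpts below, "
            "and cite the section you relied on.\n\n"
            f"Excerpts:\n{context}\n\nQuestion: {question}")


def llm_answer(prompt: str) -> str:
    # Stand-in for a real LLM call that would return a grounded, cited answer.
    return "(stub: the model would answer here, citing the retrieved section)"


if __name__ == "__main__":
    q = "Can I buy a government bond from a sanctioned country on my personal account?"
    print(llm_answer(build_prompt(q, retrieve_passages(q))))
```

Requiring the answer to cite a specific policy section is also what makes the response traceable, which matters for the liability questions discussed next.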
However, questions about liability arise when we depend on AI for such critical tasks. If the AI incorrectly summarizes a policy and offers wrong advice, who bears responsibility for the resulting policy violation? In practice, AI is held to a higher standard than humans: it is expected to be more accurate, especially when the stakes are high, as in financial decision-making.
Generative AI is evolving fast, and the next major milestone in its trajectory appears to be ‘explainability’: the ability to ask a model why it took a specific action or made a particular recommendation and receive a coherent explanation in return.
Emerging technologies like quantum computing also open new possibilities and challenges for generative AI. On one hand, quantum computing could dramatically expand AI’s problem-solving capacity; on the other, combining the two could enable more sophisticated forms of fraud and cyber threats.
Fraudsters often aim to blend in with legitimate transactions to escape detection, a significant concern in the finance sector. Because generative AI models produce ‘average’ or ‘mean’ responses based on the data they have processed, there is a risk of these models being exploited for illicit activity: fraudsters can use those typical-looking outputs to craft transactions that hide any signs of suspicion. This makes it urgent to embed generative AI in compliance procedures as a preventive measure against increasingly sophisticated financial crimes.
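A toy example shows why ‘mean-mimicking’ defeats naive detection. The sketch below flags transactions whose amounts sit far from the historical mean using a z-score; the data and threshold are invented for illustration, and real transaction monitoring scores many behavioral features, not a single amount.

```python
# Sketch of why mean-mimicking transactions evade simple outlier checks.
# The amounts and threshold are made up for illustration only.

from statistics import mean, stdev

legitimate = [120.0, 95.0, 110.0, 130.0, 105.0, 88.0, 140.0, 101.0]


def z_score_flag(amount: float, history: list[float], threshold: float = 2.5) -> bool:
    """Flag a transaction whose amount is far from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold


if __name__ == "__main__":
    crude_attempt = 9_500.0        # an obvious outlier
    mimicked = mean(legitimate)    # an amount generated to sit at the mean
    print(z_score_flag(crude_attempt, legitimate))   # True: flagged
    print(z_score_flag(mimicked, legitimate))        # False: sails through
```

An amount generated to sit at the mean passes the check untouched, which is exactly the blind spot that generative models could be abused to exploit, and why compliance tooling must evolve alongside them.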
In conclusion, generative AI offers remarkable opportunities to innovate and improve efficiency in the financial sector. However, the technology’s rapid evolution brings with it a host of ethical and practical challenges that must be carefully managed. Ensuring ethical usage, maintaining trust through traceability, and enhancing explainability are critical aspects to focus on. As we consider these implications, we must also prepare for the emerging risks and vulnerabilities that generative AI can potentially introduce, such as exploitation for illicit activities.
Given these considerations, adopting a tool that is designed to navigate the complex landscape of financial crime prevention becomes not just an option but a necessity.
Learn more about Lucinity’s Luci Copilot and how to deploy generative AI responsibly: