AI can feel overwhelming: there are so many resources available, with no clear guidance on what to trust, where the information comes from, or what to do with it.
As part of our commitment to empowering financial institutions, this piece aims to offer up a trusted voice in AI with a candid take on some of the burning questions that you’ve always wanted to ask.
Rishi Sharma, our Senior Director of AI, shares his thoughts on some pressing questions, common misconceptions, and what the future holds.
One major misconception is that AI gets smarter automatically. It's much like humans: when we want to learn something, we must be taught how to do it. Once taught, we enter a test-and-learn process, where we discover which actions are correct and incorrect based on feedback from a teacher. Similarly, an AI system can't know whether it is correct or incorrect unless it is told so. In fact, if an AI system were to automatically update its actions without feedback, it would essentially be guessing, which can be really harmful.
So, while AI systems do not learn automatically, successful AI teams are the ones that can streamline this process so that it feels automatic and just as simple as teaching another human how to change a lightbulb or how to bake a cake.
This leads to another common misconception: that AI systems act differently than humans. While there are methods to teach AI non-human behavior, the majority are still guided by humans. AI systems are only as good as the data and decisions we give them, meaning that if biases are present in the answers or the data, their knowledge reflects that bias. The knowledge and data we use to train AI are massive and can often contain hidden, harmful biases due to a lack of context or variety. If you're building AI, or trusting someone to build AI for you, how they train and maintain their models is crucial.
At Posh, we strive for safe data practice by using datasheets to diligently describe our data and behavioral tests to phase out biases. It’s really important when choosing an AI partner that you look into these risks since negligence can lead to some less-than-ideal outcomes. We hope to raise the bar for what is considered acceptable AI use in the credit union community.
I think the relationship is a lot like our current relationship with the internet. The more savvy you are with using the internet, the more effectively you can use it to solve your problems. The internet won't solve the problem for you; you still have to direct it. AI is like an enhanced internet, meaning it has knowledge and the ability to use context about our world and the things that matter to you. Knowing what word follows "The New York Yankees beat the Boston Red ___" or what a dog looks like both require context.
Even more, AI has the ability to perform actions. Once one of our chatbots knows you need to make a transfer, it can execute that transfer. So, just like we’ve seen some really internet-savvy people become a lot more efficient over the last 30 years, I expect we’ll see more AI-savvy folks become more efficient over the next 30 years. For that reason, I don’t think there is anyone who cannot or should not try to learn how to work with AI.
This is the most important question. A community is formed through shared goals among individuals. Some of the earliest conversational AI has been used to help businesses make expensive operations more manageable, like redirecting heavy volumes of customer service problems to an automated service. We want to go even further! We want to help our clients empower their own employees and their customers to reach their own goals and establish new, previously unimaginable ones! Maybe a CU employee feels like they lack the time to offer a personal touch because routine, mundane, time-consuming tasks take up too much of their day. With Posh, they can direct our bots to handle those tasks so they can focus on what matters. It's only effective as a true partnership.
The banking community is a vibrant one. Smart people already know where they want to go and how they envision success with the help of AI. We’re grateful to be part of that conversation. So, rather than just building tech, we’re in the business of giving this community the flexible tech and the support to make it work for them!
I think the whole "AI is taking over humanity" idea is a popular plot to think about, only because it's been replicated so many times in film and TV. But the true danger of AI lies in how we, as humans, build and use it. AI has existed for decades, and some have used it to shape the world to their advantage. I'm glad that AI is still a relatively new frontier for the financial industry, because we can work with our partners to ensure that how we bring this technology to the field empowers this vibrant community, strives for equity, and does no harm.
The future is a long time, [laughs]. In the short term, I think we’re going to increasingly be challenged to figure out how to best use AI, but that doesn’t mean we should stop using it. I think it means we need to ask more questions and increase our curiosity. No doubt this can be dangerous, but it can also be really empowering.
The next wave of AI should really be about giving your clients and customers superpowers! How can we allow them to do things they could not imagine being able to do? Whether that's helping them automatically fill out forms or providing a personal AI assistant that knows their exact financial goals, these more personally ingrained interactions and everyday empowerment opportunities are the way forward.
My advice to companies is: Stand up for yourselves, your customers, and your industry. Demand your AI providers explain how their technology works, how they take care of the data and biases, and how they’ll help empower you to make the system work for you.
For AI providers: It's time to deepen your understanding of these systems and adopt a more risk-averse mindset, identifying dangerous side effects so they never become reality.