
How Wall Street Firms Are Warming to AI—With Help From Silicon Valley


January 9, 2024 | Originally published on The Information | By: Lauren LaCapra

Big banks were slow to adopt cloud services, initially nervous about storing their sensitive customer data on other companies’ servers. But when it comes to generative artificial intelligence, banks and other financial services firms have been early adopters.

Goldman Sachs, for instance, has more than 1,000 developers using generative AI for coding and plans to expand use of the technology to its entire 12,000-person engineering team by the end of 2024, Marco Argenti, the bank’s chief information officer, said in an interview. The bank is also testing large language models for summarizing and drafting documents and performing complex queries on public filings, loan documents and other items. It has used both open-source LLMs and ones made by Microsoft, OpenAI and Google.

Morgan Stanley, meanwhile, was one of OpenAI’s first big customers, collaborating with the tech firm on the development of an internal chatbot. The bank is now expanding the uses of tools it has developed with OpenAI, executives involved with the effort told The Information. This year, it plans to roll out more functions that help with information retrieval, workflow problems and quality control, executives said.

Asset management firm BlackRock works with Microsoft, which sells its own products based on OpenAI technology. In December the firm began introducing AI copilots—AI-powered chatbots that can quickly parse data or generate text in response to written questions—for both employees and clients that use its Aladdin software. One of those products, eFront Copilot, allows clients to use natural-language questions to create dashboards that help them visualize different market exposures. BlackRock is testing it with a handful of clients before introducing it to 130 clients later in the first quarter.

JPMorgan Chase is also increasingly using Microsoft’s AI products, The Information has previously reported. The bank’s global chief information officer, Lori Beer, said in February that JPMorgan was testing OpenAI’s GPT-4 LLM and open-source models on “a number of use cases.”

As the banks have ramped up their use of the new AI technology, they’ve had to decide whether to build the tech internally or buy from outside firms. Banks were early to adopt older generations of AI technology, and in those cases they built the systems in-house with large teams of developers, creating their own models. That was expensive and time-consuming, and the end result did not always work well, executives said. When it comes to generative AI, technology companies like OpenAI or Google can simply do it better, and paying to use their LLMs is much cheaper, they said. Building an LLM from scratch would be extremely costly and impractical for a bank, the executives said.

Banks have to tread a fine line with AI, balancing the efficiency gains the technology offers against regulatory pressure. Policymakers have expressed increasing worry about AI use among financial firms, especially the risk of lending bias, automated credit denials, poor advice and new conflicts of interest.

“Financial institutions, as opposed to other industries, are very methodical—as they should be—in the way that they roll out new technology,” said Laura Kornhauser, a veteran of JPMorgan who is now CEO of Stratyfy, a startup that provides AI tools to help lenders improve risk management and underwriting. “That can frustrate technologists because there are longer cycles and things are slower to get implemented, but it’s because of the regulatory environment that financial institutions uniquely face. They need to do that. This is not a run-fast-and-break-things industry.”

Still, banks have found uses for AI-powered chatbots, particularly on the trading side of their businesses, as have quantitative trading firms like Jane Street.

Goldman Sachs, which began developing traditional AI tools in-house nearly a decade ago, is examining ways to expand generative AI uses across both of its major business divisions, said Argenti. He described the uses as covering a spectrum: The bank is already seeing great productivity gains among software developers and is testing the technology across operations where similar opportunities exist. It is in the early days of experimentation with more-sophisticated features, like performing complex information retrieval to support investment strategies, forecasts and pricing.

Goldman works with multiple AI providers, including Microsoft, OpenAI and Google. (The Information reported in its AI Agenda Newsletter in December that Goldman’s spending on Microsoft cloud products rose more than 20% in the second half of 2023, partly due to its development of AI models.)

Goldman uses those firms’ LLMs only for generic applications, like writing in familiar coding languages such as Java or creating workflow processes. But when it needs to perform more-proprietary tasks, it applies open-source LLMs to its internal programming language, Slang, in a secure fashion.

The bank can also connect out-of-the-box products to vector databases and open-source language models so they retrieve relevant information without any of it leaving the house, Argenti said. He emphasized that there will always be a human layer involved in any use case and that Goldman prioritizes safety and security above all else, but he believes generative AI has the potential to totally disrupt the way Wall Street does business.
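The approach Argenti describes resembles what the industry calls retrieval-augmented generation: documents are embedded into a vector store kept on the bank’s own infrastructure, the passages most relevant to a question are retrieved there, and only those passages are handed to a locally hosted open-source model. The sketch below illustrates that general pattern; the embedding model, the sample document snippets and the generate_answer placeholder are illustrative assumptions, not a description of Goldman’s actual systems.

```python
# Minimal sketch of in-house retrieval: embedding, search and generation all
# run locally, so no document content leaves the firm's infrastructure.
# Model choice and generate_answer() are assumptions for illustration only.

import numpy as np
from sentence_transformers import SentenceTransformer  # runs locally, no external API calls

# Local open-source embedding model (assumption: any comparable model would work).
embedder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Loan agreement: borrower covenants and repayment schedule...",
    "Q3 public filing: revenue, segment results and risk factors...",
    "Internal workflow note: steps for booking a block trade...",
]

# Build the in-house vector store: one normalized embedding per document.
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    query_vec = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vec  # dot product equals cosine on normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def generate_answer(query: str, context: list[str]) -> str:
    # Placeholder for a locally hosted open-source LLM; in practice the
    # retrieved passages would be packed into a prompt for that model.
    return f"[LLM answer to '{query}' grounded in {len(context)} retrieved documents]"

if __name__ == "__main__":
    question = "What are the repayment terms in the loan documents?"
    print(generate_answer(question, retrieve(question)))
```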

Argenti described how Goldman’s AI use has evolved over time, combining older technologies with generative AI. About seven years ago, Goldman started testing traditional AI tools to create formulaic pitch books. Those tools could swap in company names and similar details, but they could not construct anything like a sophisticated analysis of a client’s business problems and opportunities, and dealmakers still had to spend hours or days refining whatever the system spat out. Layering LLMs on top of the machine-learning tools Goldman developed could help bankers pull up relevant details much more quickly, getting answers to detailed questions from bots in seconds instead of days, Argenti said.

“The complexity is unprecedented—that you can ask really complex questions and you get results right away,” he said.

Morgan Stanley, meanwhile, has given employees in its wealth management business access to a ChatGPT-like bot that answers questions about investment ideas and procedures. Although the bank still has to work out some kinks, the chatbot has reduced the average time it takes financial advisers to look up research reports and similar information from 30 minutes to just a few seconds.

Morgan Stanley is testing another function that can transcribe meetings (only with clients who agree to the recording beforehand). The tool then enters certain information into a customer relationship management database for future business and generates follow-up messages for advisers to review before sending them to clients. A group of 200 advisers is testing that tool so the bank can evaluate user experience, accuracy and compliance with policies and procedures before it moves to broader implementation, executives said.