9Questions — Phil Raciti, Bardin Hill — Promises and problems of AI
- Nicolle Liu
9Questions is our Q&A series featuring key decision-makers in the corporate credit markets — get in touch if you know who we should be talking to!
At 9fin, we like to think we know a thing or two about artificial intelligence. We and our clients use it on a daily basis: it’s already embedded across our platform, used for everything from transcribing and summarizing earnings calls to comparing covenant packages.
We’ve also written extensively about how it stands to change certain aspects of the capital markets. Philip Raciti, head of performing credit at Bardin Hill Investment Partners, is similarly fascinated by this technology, so we jumped at the chance to get his perspective.
At Bardin Hill — which, in total, manages some $5bn of assets — he and his team have been using AI for things like ‘sentiment scoring’ as part of their credit analysis process.
Phil shared his insights around the potential of using AI in leveraged credit investing, and some of the philosophical, ethical and operational challenges the technology poses.
1. How is AI currently being used in the world of leveraged credit?
While marketing spin has somewhat exaggerated the extent to which AI is being implemented within leveraged credit, we are beginning to see efficiencies driven by the technology.
Most AI implementations are provided by third-party vendors, and are being used by firms to enhance research functions, security management, legal documentation review, and machine-learning portfolio strategies.
2. What do you see as this technology’s primary benefit in your industry?
I believe investors will be able to use AI to increase efficiency, drive lower-cost strategies, and analyze large sets of unstructured data.
While traditional software has made some progress in these areas, there is room for AI to drive efficiency and data access for investors, by cutting through differentiated delivery mechanisms and inconsistencies between data providers.
At the very least, investors should benefit from faster decision-making as more data becomes accessible — as a result of the processing capabilities of Large Language Models (LLMs) and enhanced AI strategies.
3. Can you share some examples of how credit fund managers are using AI to improve decision-making and risk assessment?
We’ve seen some credit managers utilizing portfolio optimization tools with target-based drivers, but we view the technology as enhanced algorithms rather than a form of AI.
These tools have the ability to enable faster decision-making when analyzing known data, but generally tend to be limited in deeper risk assessment and risk decisions overall. At Bardin Hill, we have started a program to implement enhanced risk tools utilizing LLMs which, at a minimum, could drive efficiencies in investment processes, and potentially over time help to identify hidden risks and correlated trends.
Our current development is focused on enhancing our analysis of sentiment scoring, coupled with larger data sets. As a baseline, we utilize a backward-looking assessment of how credit situations develop over time by analyzing both internally produced and externally provided natural language. Ideally, we hope to better understand credit predictive outcomes by combining situational sentiment data with other data feeds.
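The backward-looking sentiment workflow described above can be sketched in a few lines. This is purely illustrative: the phrase lexicons, the `Note` structure, and the keyword scorer are assumptions standing in for the LLM-based scoring a firm would actually use on its internal and external natural-language data.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical phrase lexicons; a real system would score text with an LLM
# or a trained model, not a keyword list.
POSITIVE = {"deleveraging", "upgrade to", "beat guidance"}
NEGATIVE = {"covenant breach", "downgrade", "missed payment", "restructuring"}

@dataclass
class Note:
    """One dated piece of natural language about a credit (internal or external)."""
    as_of: date
    text: str

def sentiment_score(text: str) -> int:
    """Crude stand-in for an LLM sentiment call: +1 per positive phrase, -1 per negative."""
    t = text.lower()
    return sum(p in t for p in POSITIVE) - sum(n in t for n in NEGATIVE)

def sentiment_series(notes: list[Note]) -> list[tuple[date, int]]:
    """Backward-looking view: score each dated note so the trend can be tracked over time."""
    return [(n.as_of, sentiment_score(n.text)) for n in sorted(notes, key=lambda n: n.as_of)]

notes = [
    Note(date(2023, 1, 15), "Issuer announced deleveraging plan after a strong quarter."),
    Note(date(2023, 6, 30), "Rating agency downgrade follows covenant breach."),
]
print(sentiment_series(notes))
```

In practice the interesting step is the one elided here: combining this per-situation sentiment series with other data feeds to look for predictive patterns.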
We have also developed summarization tools that quickly distill pertinent information, which we use to summarize daily activity across multiple dimensions, such as all entries for a specific analyst, industry, or rating category.
Lastly, we have incorporated LLM analysis as an early part of our credit-check process, asking the models a series of targeted questions for each of our processes.
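A question-driven credit check of this kind can be sketched as follows. Everything here is an assumption for illustration: the question set is invented, and `ask_llm` is a trivial keyword stub standing in for a real model call, with answers collected for analyst review rather than acted on automatically.

```python
# Hypothetical targeted questions; a firm's actual list would be process-specific.
CREDIT_CHECK_QUESTIONS = [
    "Has leverage increased materially over the last four quarters?",
    "Are there upcoming maturities within 18 months?",
    "Does the covenant package permit material collateral leakage?",
]

def ask_llm(question: str, document: str) -> str:
    """Stub standing in for a real LLM call: flags a 'yes' only when a long
    keyword from the question appears in the document, else 'unclear'."""
    doc = document.lower()
    keywords = [w for w in question.lower().split() if len(w) >= 8]
    return "yes" if any(w in doc for w in keywords) else "unclear"

def run_credit_check(document: str) -> dict[str, str]:
    """Pose each targeted question to the model and collect answers for review."""
    return {q: ask_llm(q, document) for q in CREDIT_CHECK_QUESTIONS}

report = run_credit_check("The covenant package allows unrestricted subsidiary transfers.")
print(report)
```

The design point is that the model answers a fixed checklist rather than free-forming an opinion, which keeps the output comparable across credits and easy for an analyst to audit.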
4. What are the potential risks associated with this kind of technology in credit, especially as it relates to data quality?
This industry is still in the early stages of implementing AI, and as we have seen with the advent of most technologies, certain risks have to be addressed.
While the currently available tools are quite impressive in terms of their natural responses to inquiries, we have seen erroneous output under certain prompt conditions. As an efficiency tool, AI can be harnessed to provide significant value for an organization by way of consistent and reliable data. As a research tool, it is wise to proceed with caution.
Privacy is another real concern, as there remains a risk that proprietary data could be released to external parties, and the potential exists for private entity data to be included in results provided by AI.
5. What challenges do credit fund managers face in terms of regulatory compliance when using AI?
It’s a timely question. To this point, the focus around compliance has been on utilizing generative AI to help institutions comply with regulations, including by distilling complex regulatory documents for ease of review.
Given that regulation of AI is in the early stages, and recognizing that it is critical to support the development of AI tools that consumers can trust, it is more necessary than ever to keep up with the relevant emerging policies and laws. In our industry, I expect that internal compliance functions are focused on public and private data considerations, and on intellectual property rights with respect to data feeds and news services.
6. Do you think AI could replace credit analysts?
Should AI replace credit analysts? Definitively, no. Human judgment and the ability to assess a more holistic suite of dynamics will likely remain valuable for the foreseeable future. The workplace would also feel empty, especially when the power goes out. Ideally, AI will complement the credit analysis process, and should enable analysts to access more insight, more efficiently.
Could AI replace credit analysts? To some extent and over time, sure, but I’m not convinced that leaving all analytics to machines would be smart. If we do get to that stage, I imagine reporters and portfolio managers will not be the ones making that decision. When that happens, we can all pool our universal basic income and celebrate.
7. So many companies are talking about artificial intelligence. What’s the most interesting (or ridiculous) pitch you have ever seen?
I should caveat my answer by saying that, because LLMs are in their infancy and just beginning to be adopted, winners and losers are still far from certain.
One example involves companies that interface with humans via call centers. Replacing call-center personnel with an advanced LLM could create significant margin uplift through lower headcount and improved customer-servicing capabilities. That said, the provider of those services may still be disintermediated entirely by the end customer, or by a competitor that is faster to implement an AI solution. Business inertia makes quick change unlikely, but once the writing is on the wall, valuations will follow.
Zillow’s failed home-buying experiment, which relied on algorithm-based purchasing, and Tesla’s full self-driving software delays highlight how algorithms can fail to account for every scenario, even with the best of intentions in development.
8. How do you see this technology evolving in credit over the next 5-10 years?
Over the next five years, I expect AI to enable enhanced portfolio creation tools, scenario analysis, and deeper credit analysis. Firms that don’t adopt and leverage AI will likely be left behind, while those that do may grapple with index-like behavior in credit positioning, since they will have much the same access to information as their competitors. Firms may be forced to find the right AI integration balance, as well as ways to find data edges where they can.
Looking beyond five years, the possibilities are varied and depend on the speed at which generative AI evolves. Of course, this ironically makes it difficult for anyone to make predictions. The AI tools we expect to have at our disposal in the near-term may provide some clarity, but will likely create more questions than answers.
9. What’s your favorite AI-generated error so far?
If errors in judgment count here, one of my favorite AI mishaps occurred a few months back, when a livestream had to be taken down because the AI driving it aired some controversial dialogue.
The project, called ‘Nothing, Forever’, was a procedurally generated animated sitcom based on the television series Seinfeld. The show played like a run-on episode of Seinfeld, complete with stand-up routines; all the voices and dialogue were AI-generated, and users in chat rooms could influence the dialogue model.
Eventually, as tends to happen when human textual communication is leveraged as a foundation, the model was bound to trip up, and ‘Nothing, Forever’ was no exception.