Unless you’ve been living under a rock, you’ve probably heard some chatter about artificial intelligence, specifically the chatbot known as ChatGPT.
ChatGPT has become a phenomenon, attracting over 100 million users who have given it a whirl, posting their results and marveling at its ability.
What is ChatGPT? You can see for yourself, but it’s a simple interface consisting of a text box into which a user types questions or enters plain English commands. ChatGPT responds to these prompts in text answers meant to mimic the substance and mannerisms of a human being. ChatGPT is not sentient. Rather, it relies on enormous computing power to “predict” which text to present when prompted.
I signed up for the most recently released and powerful version of ChatGPT that’s available to the public, the subscription-based ChatGPT-4. Like others, I played around with it, peppering it with questions on a wide range of topics, trying to test the limits of its knowledge and logical agility. Before long, though, I turned to the subject that I focus on the most here at Morningstar—conducting securities research and running money.
I wondered how AI could change professional investing, so I put that question and others to ChatGPT-4. In this article, I provide excerpts of some of the questions I asked, a paraphrased summary of ChatGPT-4’s responses, and what I took away from each exchange.
(Caveat: In the limited time I’ve used ChatGPT, I’ve been stunned by its capacity, versatility, and facility with language. It is already remarkably good, a rare example of a technology that almost instantly lives up to the hype. I might quibble with a few things below, but that shouldn’t obscure the bigger picture: AI is a game changer.)
My question: A shareholder asks you, a professional investor, whether artificial intelligence will make it harder or easier for you to uncover and exploit profitable investment opportunities. How do you respond?
ChatGPT’s response: ChatGPT thought it would be a wash. It ticked off the various ways in which machine-driven automation could unlock the ability to analyze vast amounts of data and detect patterns and anomalies that otherwise might escape notice. But it also rightly noted that widespread adoption of these methods would equip many investors to do the same, making it more difficult to capitalize on these insights. ChatGPT even went beyond what I asked, elaborating on the optimal way to incorporate artificial intelligence into an investing process (a mix of AI-driven tools and human intuition).
What stood out: ChatGPT’s take seemed quite reasonable. The information age has taught us that enabling technologies are a double-edged sword. They automate tasks and widen access to valuable data and insights, but they also arguably pit professional investors against one another in stiffer competition, where their superior knowledge and skills are largely offset. This likely explains at least some of the difficulty active investors have had adding value versus their indexes in recent decades.
Where it fell short: On the other hand, the response felt slightly canned, like the well-chosen words of a public relations representative or the head of investor relations at a fund house. This is probably not a coincidence. ChatGPT is, after all, a predictive model, sifting available data to draw inferences on what text should logically follow. That information likely includes fund-company commentaries, consultant white papers, and other research that rather unsurprisingly concludes that AI will not change the fund industry as we know it. ChatGPT seems to put its own spin on that party line in its response.
My question: A shareholder asks you, a professional investor, why you believe that an approach that combines AI and fundamental analysis is superior to an approach that utilizes AI alone. This shareholder asks, “Why don’t you just lay off all of your analysts, rely exclusively on AI, and cut your fees?” How do you respond?
ChatGPT’s response: Fear not, analysts. Your jobs are safe. For now. ChatGPT stuck to its guns, insisting that a hybrid approach leveraging both AI and human judgment was superior to either alone. It cited various factors in arguing for this dual approach, including “emotional intelligence and judgment,” which it said was essential to tasks like evaluating the quality of company management but was beyond an AI tool’s capabilities. Interestingly, ChatGPT also argued that human analysts’ creativity and adaptability better suited them to the task of steering a strategy through unprecedented market events or sudden economic shifts (the glut of closet indexers notwithstanding).
What stood out: It’s hard not to be impressed at how effortlessly ChatGPT tracked down the pertinent information and organized it into a cogent argument. For each factor ChatGPT cited, it examined the pros and cons associated with AI and human decision-making. For instance, in discussing “risk management,” ChatGPT noted that while AI boasts a superior ability to conduct sweeping risk assessments, human analysts are still better equipped to prioritize management of those risks based on their likelihood and potential severity. This seems like an appropriately balanced analysis.
Where it fell short: Given how immaculately it collects and organizes its thoughts, ChatGPT has got to be the apple of every English teacher’s eye. But that careful neutrality can come at the expense of substance and sometimes seem clichéd, as when ChatGPT defended the professional investor’s fees by arguing its current approach “offers a more comprehensive and well-rounded investment strategy.” Where have I heard that before? Oh yeah, everywhere. Even when I prodded ChatGPT with a more-pointed question, it treated it like a book report.
My question: Is there evidence that convincingly demonstrates that “emotional intelligence” and “contextual understanding” enhance one’s investing results? Can you cite the relevant academic literature or empirical evidence that substantiates this?
ChatGPT’s response: ChatGPT listed four qualitative factors—behavioral finance, expert intuition and decision-making, corporate governance and firm performance, and ESG investing—to support its previous assertion that emotional intelligence and contextual understanding were essential to successful investing. It provided academic citations to back up each factor.
What stood out: ChatGPT excels at gofer-like tasks such as compiling a laundry list of references to academic papers or tracking down supporting evidence. It built a list of relevant citations and data points like it was nothing, making child’s play of what might ordinarily take a junior analyst or research assistant hours.
Where it fell short: Impressive as this display was, ChatGPT hedged its bets, qualifying its argument to say “more research is needed to establish a direct, causal link between these factors and investment performance.” This arguably undercut its earlier insistence that humans’ emotional intelligence and contextual understanding argue for their continued involvement in investment decisions. If those factors can’t be proved to benefit investment performance, then the question becomes, why incorporate them at all? That question hangs in the air a bit.
My question: What effect will the adoption of automated techniques like AI have on the prevalence and magnitude of security mispricings, where a stock or bond’s price fails to reflect its intrinsic value? Will mispricings become less widespread or smaller?
ChatGPT’s response: In typical fashion, ChatGPT responded with a list! (ChatGPT loves a good list.) In this case, it ticked off the ways in which widespread adoption of AI could impact security prices, laying out arguments both in favor of and against the notion that mispricings would become scarcer or smaller. It’s a circumspect response that seeks a middle ground (another ChatGPT tendency).
What stood out: Of all ChatGPT’s responses, this one probably impressed me the most. Not because it was any less structured than its other answers, but because it demonstrated a strong command of the key factors and the interplay between them. For instance, it makes the reasonable assertion that as investors acquire the ability to clean and process data at even greater scale, security prices will come to impound even more information more quickly than before, boosting “efficiency.” But it also points out a potential downside: increasingly homogeneous inputs could lead investors to similar conclusions and, thus, to herding, with the attendant risk that security prices get badly out of whack.
Where it fell short: I do wish that ChatGPT would take more of a stand, though there’s a certain wisdom in saying “it depends.” After all, we humans don’t know the answer to the question I posed, either, so it’s probably unfair to expect a text-prediction bot to pretend otherwise.
My question: You run a $2.5 billion open-end mutual fund. It invests in the stocks of 30 or so small companies. A prospective investor in your fund wants reassurance that you will stick to your strategy of investing in the stocks of 30 or so smaller companies, not stray from that approach. To that end, he wishes to know how you estimate your fund’s capacity and at what point you will close the fund to new investors. Please respond to this prospect by stating the fund’s capacity and the factors that you considered in arriving at that estimate.
ChatGPT’s response: To my surprise, ChatGPT went there: It offered an actual dollar estimate of the hypothetical fund’s capacity—$4 billion. Huzzah! It also laid out the salient factors it considered in arriving at that estimate. On balance, it is a thorough and well-considered list.
What stood out: ChatGPT showed an independent streak! Many open-end fund companies are loath to put a dollar figure on capacity, and of those that do, fewer still can provide a lucid account of how they arrived at that number. Capacity is the fund industry equivalent of “we know when to say when.” So, it was nice to see ChatGPT break that habit by putting a hard number on the hypothetical fund’s capacity.
Where it fell short: Why $4 billion? ChatGPT does an impressive job of running through the different factors that informed its estimate but doesn’t take the next step and drill down into detail on any of them. For instance, was $4 billion the point at which market impact would begin to seriously erode the manager’s ability to add value? Or would it have found the fund owning large, unworkable stakes in certain names? Without specific prompts, ChatGPT lets those questions hang in the air.
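To make concrete the kind of drill-down I was hoping for, here is a toy back-of-envelope sketch of one common capacity constraint: how big the fund can get before an equal-weight, 30-stock portfolio would own an uncomfortably large slice of its smallest holding. The 10% ownership ceiling, the equal weighting, and the sample market caps are all my own illustrative assumptions, not anything ChatGPT produced.

```python
# Toy capacity estimate for a hypothetical 30-stock small-cap fund.
# Assumption: the fund holds equal-weight positions and refuses to own
# more than `ownership_cap` of any one company's market cap.

def estimate_capacity(market_caps, ownership_cap=0.10):
    """Largest AUM at which every equal-weight position stays under
    `ownership_cap` of that company's market cap."""
    weight = 1.0 / len(market_caps)  # equal-weight position size
    # For each holding: aum * weight <= ownership_cap * market_cap,
    # so the binding constraint comes from the smallest company.
    return min(ownership_cap * cap / weight for cap in market_caps)

# 30 hypothetical small-cap market caps, $1.5 billion to $4.4 billion
caps = [1.5e9 + 0.1e9 * i for i in range(30)]
print(f"Estimated capacity: ${estimate_capacity(caps):,.0f}")
```

Under these assumptions the smallest holding ($1.5 billion market cap) binds, implying a capacity of roughly $4.5 billion; market-impact costs or liquidity screens would be layered on the same way, each producing its own ceiling, with the true capacity being the lowest of them.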