Artificial intelligence (AI) promises to enhance the efficiency and effectiveness of M&A commercial due diligence (CDD) processes. At Asteri, we’ve been exploring how to harness generative AI (GenAI) on CDD projects. In this post, we cover the benefits, risks, and strategies for success when using AI in commercial due diligence.
The verdict? 5/10. Useful for some tasks.
Benefits of using AI in Commercial Due Diligence
Using AI in CDD can accelerate both the conduct and the analysis of primary and secondary research. In our view, it is not fit for quantitative analysis.
- Primary research (8/10): The most useful application we’ve identified is in primary research. Prompting a large language model (LLM) like GPT-4 or Claude 3 to analyze interview transcripts speeds up a labor-intensive task; the work gets done faster and better with an LLM’s support (see the first sketch after this list).
- Secondary research (7/10): GenAI tools like Perplexity help speed up secondary research on market trends and the competitive peer group. With LLMs, a CDD team can start Day 1 with a robust set of hypotheses to test. Prompting an LLM to sift through financial statements, market reports, customer reviews, and other relevant sources can surface patterns, trends, and anomalies that might otherwise go unnoticed, helping the team uncover hidden opportunities or potential red flags early in the process. AI can also speed up due diligence by automating routine tasks. For example, using natural language processing (NLP) techniques to extract key information from target company documents such as contracts and patents reduces the time and effort required for review (see the second sketch below). This frees up the CDD team to focus on higher-value activities.
- Quantitative analysis (0/10): Using AI for quantitative analysis in CDD carries too much risk. We are not using it for this, nor do we have any plans to.
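To make the transcript use case concrete, here is a minimal sketch of the kind of prompting we mean. It assumes the OpenAI Python SDK, an `OPENAI_API_KEY` in the environment, and a `transcripts/` folder of plain-text expert-call notes; the model name, prompt, and file layout are all illustrative, not a prescription.

```python
# Minimal sketch: prompting an LLM to extract themes from expert-call
# transcripts. Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; model name and prompt are illustrative.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a commercial due diligence analyst. Summarize the interview "
    "transcript into: key themes, customer sentiment toward the target, "
    "competitive mentions, and direct quotes supporting each point."
)


def summarize_transcript(path: Path) -> str:
    """Send one transcript to the model and return its structured summary."""
    transcript = path.read_text()
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
        temperature=0,  # keep output as deterministic as possible
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for transcript_file in sorted(Path("transcripts").glob("*.txt")):
        print(f"--- {transcript_file.name} ---")
        print(summarize_transcript(transcript_file))
```

Every summary this produces is a starting point for the analyst, not a finding: the verification burden discussed in the risks section below still applies.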
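For the document review point, here is an equally minimal NLP sketch using spaCy’s off-the-shelf named entity recognizer to pull parties, dates, and contract values out of a contract excerpt. It assumes spaCy and its small English model are installed; the contract text is invented, and a production pipeline would need much more than this.

```python
# Minimal sketch: spaCy NER to extract parties (ORG), dates (DATE), and
# amounts (MONEY) from a contract excerpt. Assumes:
#   pip install spacy && python -m spacy download en_core_web_sm
# The contract text below is invented for illustration.
import spacy

nlp = spacy.load("en_core_web_sm")

contract = (
    "This Supply Agreement is entered into on 1 March 2023 between "
    "Acme Industries Ltd and Globex Corporation for an annual value "
    "of $2.4 million, renewable until 28 February 2026."
)

doc = nlp(contract)
for ent in doc.ents:
    if ent.label_ in {"ORG", "DATE", "MONEY"}:
        print(f"{ent.label_:>6}: {ent.text}")
```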
Risks of using AI in CDD
Three key risks are hallucination, lack of transparency, and the difficulty of training models on target company data.
- Hallucination: Hallucination is when an LLM generates responses containing false, misleading, or nonsensical information but presents them as factual. The rule of thumb is to never trust an AI. In our experience, the more precise the information you ask an LLM for, the more prone it is to hallucination. For example, if you ask an LLM to build a product benchmark, the output is at best a hypothesis to test. The CDD team must verify any and all factual outputs. On a recent project, I checked the audit trail provided by Perplexity and realized it had been feeding me BS.
- Lack of transparency: LLMs like ChatGPT and Claude are known as “black box” models because they operate as opaque systems, obscuring the logic behind their outputs. This makes validating results or identifying potential errors very difficult, if not impossible. So forget LLMs if you’re looking to build a bottom-up TAM model.
- Training: Training a machine learning model to predict future performance from the target company’s historical data is attractive, but very hard to do in practice. The rule of thumb is that you need at least ten times as many data points as there are features in your dataset: if your dataset has 10 columns, you should have at least 100 rows (see the sketch below). That is more data than the target company is generally willing to share.
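The 10x rule of thumb is easy to operationalize as a sanity check before anyone invests in model-building. Here is a minimal sketch; the pandas usage and the example numbers are illustrative, and the rule itself is a heuristic, not a statistical guarantee.

```python
# Minimal sketch of the 10x rule of thumb: flag whether a target's dataset
# has enough rows to justify training a predictive model. The threshold is
# a heuristic, not a statistical guarantee; numbers are illustrative.
import pandas as pd


def enough_data(df: pd.DataFrame, rows_per_feature: int = 10) -> bool:
    """Return True if the dataset meets the 10-rows-per-feature heuristic."""
    n_rows, n_features = df.shape
    required = n_features * rows_per_feature
    print(f"{n_rows} rows vs. {required} required for {n_features} features")
    return n_rows >= required


# Example: 10 features -> at least 100 rows needed, but we only have 60.
df = pd.DataFrame({f"feature_{i}": range(60) for i in range(10)})
print(enough_data(df))  # 60 rows < 100 required -> False
```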
In conclusion, AI can speed up CDD market research if used thoughtfully. But until commercial LLMs improve the traceability of their logic, forget about using them for quantitative analysis and forecasting. We certainly don’t feel comfortable using AI for that purpose.
Watch this space, because the GPT you’re using today is the worst you will ever use.