Building an AI RCM Company: Requirements Then vs. Now
This article is the second in a 3️⃣-part series about my 4-year journey building an AI company in Revenue Cycle Management (RCM). The series is broken into three parts -
1️⃣ The business of AI Revenue Cycle Management
2️⃣ Building an AI RCM Company: Requirements Then vs. Now → You are here.
3️⃣ Lessons Learnt 🎓
Back in 2019, my co-founder, Fisayo Ositelu, and I took a leap of faith 🚀, leaving our conventional jobs to start a healthcare AI venture with a fintech twist. The problem was clear: independent doctors often waited up to 60 days ⏳ to be paid by insurance companies for their services. Our solution was to provide them with upfront payments (akin to cash advances) 💵 for a fee.
In addition, we offered revenue cycle services, recouping our fees and the original advance when the insurance company reimbursed the claim. To achieve this, we built AI 🤖 models capable of predicting claim prices, assessing the likelihood of denials and rejections, and identifying other risk factors.
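To make the denial-risk idea concrete, here is a minimal sketch of how such a score can be computed. The feature names and weights are entirely hypothetical, illustrative placeholders - a real model would be trained on historical remittance data rather than hand-set:

```python
import math

# Hypothetical feature weights -- illustrative only, not a trained model.
WEIGHTS = {
    "days_to_submission": 0.04,   # late filing raises denial risk
    "prior_denials": 0.60,        # this payer has denied similar claims before
    "missing_modifier": 1.20,     # incomplete or inconsistent coding
}
BIAS = -2.0

def denial_risk(claim: dict) -> float:
    """Return a 0-1 denial-risk score via a logistic function over claim features."""
    z = BIAS + sum(WEIGHTS[k] * claim.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

clean = {"days_to_submission": 5, "prior_denials": 0, "missing_modifier": 0}
risky = {"days_to_submission": 40, "prior_denials": 2, "missing_modifier": 1}
print(round(denial_risk(clean), 3))
print(round(denial_risk(risky), 3))
```

A score like this can then gate the upfront payment decision: advance cash only on claims below a chosen risk threshold.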
We encountered several obstacles in executing this task effectively, much like many AI companies of that period. Some of these barriers included:
Cost 💸 - Building a well-functioning AI model was extremely expensive. Cost included:
Engineering lift: Assembling a team comprising Machine Learning experts, Data Scientists, and Software Engineers required a lot of upfront capital 💼
Data Acquisition 📊: Procuring high-quality and sometimes exclusive data sets often meant spending a significant amount of capital
Infrastructure Overhead: Training AI models, especially with massive data, was not only resource-intensive but occasionally led to situations where the costs outpaced the benefits, particularly when models underperformed.
AI reliability - Specialization over generalization. Even a model built to solve one specific problem well often needs generalized knowledge to handle the unexpected ways users interact with your solution.
Data training, labeling, and validation - AI models needed rigorous training to achieve desirable accuracy levels. Often, this entailed assembling large teams to vet results for precision manually.
These problems often compelled AI startups to lean heavily on manual offshore services to plug the performance gaps in their products until acceptable accuracy levels were reached. Most of the time, limited funds meant those accuracy levels were never reached, entrenching the reliance on offshore vendors. The final result was a technology-enabled solution with lower-than-expected gross margins.
Fast forward to today, and many of these issues are no longer a problem.
Cost 💸 -
Engineering Overhead: Advancements like Generative AI and Large Language Models have drastically reduced costs. Businesses can now deploy highly sophisticated AI tools with far fewer engineers by leveraging platforms like ChatGPT.
Data Accessibility: While procuring proprietary data remains challenging, large language models like ChatGPT, having "read" vast swaths of the internet, put broad general knowledge within easy reach.
Infrastructure Overhead: Platforms like ChatGPT have also reduced the infrastructure costs associated with deploying and scaling applications.
AI Reliability: The availability of numerous generalized models today enables businesses to create specialized models, fine-tuned with custom data, enhancing reliability considerably.
Data training, labeling, and validation - Contemporary Large Language Models can seamlessly interact with diverse data formats, providing labeling assistance with remarkable accuracy.
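As an illustration of that last point, here is a minimal sketch of LLM-assisted labeling. The `call_llm` function is a stub standing in for any chat-completion API (the label set and prompt wording are invented for this example); in practice you would still spot-check a sample of the labels by hand:

```python
import json

LABELS = ["clean", "likely_denial", "needs_review"]

def build_prompt(claim_note: str) -> str:
    """Assemble a classification prompt asking for a JSON reply."""
    return (
        f"Classify this claim note as one of {LABELS}. "
        'Reply with JSON: {"label": ...}.\n\n'
        f"Note: {claim_note}"
    )

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a hosted model API here.
    return json.dumps({"label": "needs_review"})

def label_claim(note: str) -> str:
    """Parse the model's reply, falling back to human review on anything unexpected."""
    reply = json.loads(call_llm(build_prompt(note)))
    label = reply.get("label")
    return label if label in LABELS else "needs_review"

print(label_claim("Modifier 25 missing on E/M visit"))
```

The guardrail in `label_claim` matters: treating any malformed or unrecognized reply as "needs_review" keeps a human in the loop, which is what made this approach viable where fully manual labeling teams once were required.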
In summary, the dynamics of building an AI company have undergone a radical transformation. The entry barriers have dropped, making significant improvements in gross margins feasible. 📈

