Sharing evidence to inform the future of health care delivery and complex care: Lessons from the Camden Coalition and J-PAL North America partnership
In this second post of a two-part blog series on the Camden Core Model intervention, Aaron, Amy and Kathleen reflect on their key learnings from the evaluation and how study results will inform the future of the Camden Coalition’s work and the broader field of health care delivery and complex care. The first blog post explored the impetus behind the study and how a strong partnership between the researchers and practitioners paved the way for a successful evaluation.
What did you learn from actually conducting the evaluation, now that it's at its close?
Aaron: We knew participating in the study was going to put a burden on our front-line staff. We learned a lot about how to support staff wellbeing and how to build a culture that places a high emphasis on self- and group reflection to help avoid burnout.
Kathleen: Our hypothesis was that a short-term care management intervention would reduce readmission. Along the way, it became clearer to us that a short-term intervention focused on connecting patients back to the health system wasn’t going to be enough. Unravelling lifetimes, and sometimes generations, of social need and whole-scale community disinvestment takes much longer to do.
Amy: I’ve learned that the value of a good partner and a good partner relationship cannot be overstated. All kinds of unexpected things come up during an evaluation, and having a partner that is dedicated to the same mission and is clear on their goals and priorities is critical. It’s not just that they’re incredibly good at what they do. They’re also great to work with.
How will results from this evaluation inform the future of the Camden Core Model and what future areas of research are you exploring?
Kathleen: This is an organization that, from the beginning, was upfront about the fact that we wanted to adapt and change along the way. We understood that this was not just about medical complexity, but about the intersection of medical and social complexity. We are continuing to make those connections both in our care management work and our work with partners.
Aaron: We sought out the answer to one specific question very rigorously within this randomized evaluation, but there are a lot of related questions in the back of our minds that we can pursue as we have more time. Care management’s value proposition shouldn’t hinge on just one outcome measure, so there’s a lot more work to be done studying additional measures that will give us a broader view of programmatic success. Given the heterogeneity of study participants, we also want to look more closely at the impact of intervention dosage and potential differential treatment effects across patient subgroups.
Kathleen: The fortuitous thing is that the study results are being published at a time when health systems are talking about social determinants in a way that is different than when the study started in 2014. We are spending a lot of time thinking about the connection, or lack of connection, to those social determinants. That is why it is critical that we continue to evaluate and learn.
How do you think this evaluation will contribute to the field of health care delivery and complex care?
Kathleen: The evaluation provides more information for the field as we think about program design, what’s needed to make these interventions possible, which populations could benefit most, and which populations need more than just a care management intervention. The evaluation informs what we've known for a long time—that health care alone can't fix these issues. We see these results as showing that we really need to work harder to break down the silos between the services our patients are getting and what they actually need to become healthier.
Amy: The main thing I hope this evaluation will contribute to the field of health care delivery in general, and the field of complex care in particular, is that we need hundreds more Camden Coalitions—partners who are willing to be active learning organizations and work with us to embed rigorous evaluation into their ongoing standard business models. This study emphasizes the pitfalls that can occur with observational studies, particularly when you’re dealing with super-utilizers who are typically at the peak of their crisis when enrolled in the intervention, leading to a natural regression to the mean. Hopefully this will inspire other organizations to develop the data infrastructure needed to think about doing rigorous evaluation and consider partnering with J-PAL or other like-minded researchers.
Kathleen: For many patients of the Camden Coalition, hospital utilization is only unnecessary and avoidable when there’s a substitute for those stays. If we want to reduce hospital use because it’s more expensive and not necessarily the care patients need, then we need greater investment in community-based alternatives that are effective and evidence-based. For example, almost two years into the randomized evaluation, we implemented a Housing First program because it was clear to us that we would never medically stabilize certain patients until their housing was stable.
Aaron: We successfully targeted and enrolled people with some of the most extreme utilization patterns and complex needs. The individuals we serve have accumulated a lifetime of complexity from personal adverse life experiences to dealing with all our society’s structural inequalities. While we observed some regression to the mean, this population continues to have high hospital use and we must acknowledge the difficulties inherent in changing these trajectories.
What advice would you give to other organizations considering a randomized evaluation?
Kathleen: Participating in a randomized evaluation is hard work, especially for a small organization. But we are extremely proud of our work, the partnership with J-PAL, and what the evaluation results are contributing to the field. We would have liked to have seen other results, but we are also not afraid of what we’ve learned. We are using the results to push ourselves and, we hope, to inform the field of complex care more broadly.
Aaron: It’s really critical that every organization implementing a randomized evaluation, even if it doesn’t have a team of data scientists or analysts, have some degree of capacity to engage in quality improvement work.