Here’s the question:
What question would you ask of a practice-based primary care research network?
Consider that you are involved in establishing some of the first research questions that would be answered by a group of engaged, networked, and interested primary care practices – what would those questions be?
For this thought experiment, you could be a funder or a participant (clinician or patient) or an outside researcher.
Assume the network could collect whatever kinds of data you would need to answer your question.
Assume, also, for the quantitative types, that the network has 80 primary care providers (mostly family doctors, some nurse practitioners) across multiple rural and urban sites and that these are full-service primary care practices. Amazingly, all patients consent to participate and there are 120,000 patients. All practices are using an EMR and data from the charts would be encoded to support your question(s).
I will start with one of my questions:
What is the impact on overall capacity* of practices where patients with mental illness (mood disorders such as depression and anxiety) are given a proactive program and the tools to self-manage their condition through a Personal Health Record (PHR)? Is there a difference if the PHR is integrated with their primary care provider’s practice EMR? Does self-management also change quality of care (perceived and objective) for the patients involved?
* capacity should be examined both in terms of financial cost to the practice to run the program and changes in number of patients seen by providers over time compared to matched controls.
If this is interesting to you, add your own question as a comment or join the discussion by supporting / adding to other questions.
My goal here is to collect the types of questions people want answered not to focus on how to answer them (that comes later).
Amongst my work friends, we like to “fail fast” – that is to put up something for feedback quickly so that it can rapidly evolve with group input rather than polishing something on your own until you think it is “done” only to find out your second step led you in a direction you didn’t need to go.
So here is an early version of a goal map for a PCRN – this is based on the i* notation and does take a few sentences to describe. First, there are actors, represented by circles. These actors have goals, the pill shaped icons. Actors in a network are typically dependent on others to achieve certain goals. The arrows represent the dependencies. You can read it like this:
“ACTOR X wants to achieve GOAL A and is dependent on ACTOR B to achieve GOAL A”
or less abstractly:
“A PATIENT wants to have GOOD QUALITY CARE and is dependent on a FAMILY DOCTOR to receive GOOD QUALITY CARE.”
Hopefully that helps you understand the image below (click for a larger version), and do provide comments – I would like to use this post to solicit public feedback and revise the map to help support our local design thinking.
There is an expectation that as we develop and use electronic systems (e.g. EMRs, PHRs etc) that this “will yield a wealth of available data that we can exploit meaningfully to strengthen knowledge building and evidence creation, and ultimately improve clinical and preventive care.”
Leveraging data that is already in EMRs makes sense for a PCRN at this point in time. The question at this point is whether a PCRN would rely only on data that is routinely collected as part of care, or use both routinely collected data and specific information collected for a given study.
Paper networks would send out specific questionnaires or data collection forms. These could be easily “integrated” into the paper chart (by photocopying). As long as the pages were 8 1/2″ x 11″ (in North America at least), then they fit all the standards they needed to fit.
In the EMR world, it is more complex.
Here are some pros and cons to each approach:
Routinely collected data
- May be more variable (I may routinely document less than you, or I may be in the habit of documenting elements as free text rather than using existing templates)
- May be structured differently between EMRs
- Could have evolved over time
- May not capture what a study needs to know
- Typically is not sufficient for interventional studies
- May have ethical considerations if retrospective data is used for research.
Specifically Collected Data
- Requires more of the EMRs in the network: they must at least provide specific templates to record additional data
- Cost could be more if EMR vendors are required to build in custom features to handle the display of these study questions.
- Data models for the study may conflict with the data model in the EMR (e.g. smoking — the study might require a detailed description in pack-years, whereas the EMR might have taken a different approach. How are these data reconciled in the EMR?)
- Requires more from the end user — they have to be keen enough to record some amount of extra data. While it might be minimal, even minimal can add up.
- Requires a change in work flow (e.g. the clinician now documents additional information, may have to ask different questions, etc.)
- The data is better suited to the study question
- The data is potentially more consistent across EMRs
- It is easier to assess data quality for specific data
There are several things that need to be worked out if one wants to start collecting specific study data. It is an important part of the overall design of a network and depends on the kinds of questions the network wants to ask.
In BC there has been a lot of discussion about setting up an EMR-based primary care research network (PCRN). Or rather, there are talks about different types of research networks – some existing in other provinces and some being considered for BC.
This will likely be the first in a few posts about PCRNs. Today, types of EMR-based PCRNs.
Types of EMR-PCRNs
1. Contained within the EMR
This is probably the simplest model. All providers are on the same EMR instance and can run queries against their database. Depending on the size of the practice, or if they are using a distributed EMR (e.g. multiple practices sharing the same EMR), this can work just fine. With some products, this could even scale up to 100s of clinicians on a single, enterprise-wide system. It does require that everyone be on the same EMR and that someone in the group can run the appropriate queries on the database. It does not require harmonization of data models, as there is only one model.
Data can be analyzed within the EMR and then tools like Excel can help the clinician / researchers set up the reports for reporting / publication (if the EMR does not do what’s needed).
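As a rough sketch of what an in-EMR query might look like (using SQLite, with entirely hypothetical table and column names), here is the kind of per-provider count a clinician / researcher might then export to Excel:

```python
import sqlite3

# Hypothetical schema: a shared EMR database with a patients table
# (id, provider_id) and a problems table (patient_id, icd9_code).
# All names and codes here are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patients (id INTEGER PRIMARY KEY, provider_id INTEGER);
CREATE TABLE problems (patient_id INTEGER, icd9_code TEXT);
INSERT INTO patients VALUES (1, 10), (2, 10), (3, 11);
INSERT INTO problems VALUES (1, '311'), (3, '311');
""")

# Count patients with a given diagnosis code, per provider.
rows = sorted(conn.execute("""
    SELECT p.provider_id, COUNT(DISTINCT p.id) AS n
    FROM patients p
    JOIN problems pr ON pr.patient_id = p.id
    WHERE pr.icd9_code = '311'
    GROUP BY p.provider_id
""").fetchall())
print(rows)  # [(10, 1), (11, 1)]
```

The point is less the SQL itself than the fact that, with one shared database and one data model, this is the whole pipeline.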
2. Patient Data exported to Central Repository
Here, specific data is routinely (or automatically) extracted from the EMRs involved. The data is at the patient level, although typically de-identified to some degree. A central server collects the data and the queries are completed on the de-identified data. Depending on how, or if, data is de-identified, it could be linked to other data.
CPCSSN (the Canadian Primary Care Sentinel Surveillance Network) uses this model, although there is an additional layer, as de-identified data is aggregated at two levels. CIHI has a Voluntary Reporting System (VRS) that is similar.
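A minimal sketch of the de-identification step in this model, with hypothetical field names: patient identifiers are replaced with a salted one-way hash (so the central server can still link one patient's records over time without learning who they are), and dates are coarsened to reduce re-identification risk.

```python
import hashlib
from datetime import date

def deidentify(record: dict, salt: str) -> dict:
    """Prepare one patient-level record for export to a central
    repository. Field names are hypothetical illustrations."""
    out = dict(record)
    # Salted one-way hash preserves linkability without identity.
    out["patient_key"] = hashlib.sha256(
        (salt + record["chart_number"]).encode()
    ).hexdigest()
    del out["chart_number"]
    # Coarsen full birth date to birth year.
    out["birth_year"] = record["birth_date"].year
    del out["birth_date"]
    return out

rec = {"chart_number": "A12345", "birth_date": date(1970, 5, 17), "dx": "311"}
export = deidentify(rec, salt="practice-level-secret")
print(export["birth_year"], export["dx"])  # 1970 311
```

Note that the strength of the linkability (and of the privacy protection) depends heavily on how the salt is managed, which is exactly the kind of design decision a real network would need to settle up front.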
Options 1 and 2 are two ends along a spectrum. Between these are some interesting hybrid options.
3. Exported EMR Reports
In BC, there have been discussions around how one can engage multiple groups on different EMRs in a variety of research projects without collecting ANY personal health information (identified, de-identified or otherwise) into a PCRN. Paper-based practices do this with various tally sheets and anonymized data collection forms. In the EMR world, one could develop standardized reports that run in the EMR and generate practice- or provider-level information that can then be shared. The reports could be shared manually or (more rationally in the electronic world) produced in a standard format (e.g. XML) that can auto-populate a central server. The central server holds no patient data, even de-identified data; instead it has data at the provider / practice level.
The role of the central server can then focus more on presenting data back to members of the network.
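A standardized provider-level report in XML might look something like this sketch (element names, measure names, identifiers, and counts are all hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical aggregate counts produced by a standardized report
# run inside each EMR -- no patient-level data, only practice totals.
counts = {"patients_with_depression": 412, "patients_total": 3180}

report = ET.Element("practice_report",
                    practice_id="bc-042",
                    query="depression-prevalence-v1")
for name, value in counts.items():
    ET.SubElement(report, "measure", name=name).text = str(value)

# This payload is what would auto-populate the central server.
xml_payload = ET.tostring(report, encoding="unicode")
print(xml_payload)
```

Because only aggregates cross the practice boundary, the privacy question shifts from "how is patient data protected centrally" to "are the counts small enough to be identifying," which is a much easier problem to govern.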
The issue with this model is one of practical scalability and flexibility. The network is directly dependent on each EMR being able to create new queries in time to answer each question. True, some strong products with advanced users could build these queries without direct programmer involvement, but there will likely be some products and some questions that need more technical work. There are also issues when complex queries slow down an EMR in a large practice. This brings me to the fourth option.
4. The “Edge Server” (aka the Distributed Data Warehouse)
Another approach, one which there have been several conversations about, is to use a kind of “edge server”. “Edge server” is a term that can mean many things. In this context, I mean a server that sits on the edge of the clinician’s network, syncs patient data with the EMR on one side, and exposes a query engine to the central PCRN server on the other side. The edge servers do not expose patient information to the central server, but they respond with answers to questions. Think of it like a distributed data warehouse (if that isn’t too much of an oxymoron).
The edge server contains a copy of (select) EMR data that has been transformed into a structure that (a) is as consistent as possible across EMRs and (b) is designed for complex queries. For argument’s sake, let’s say there is one edge server per practice EMR (and a practice could have many doctors). The PCRN has a single central server that manages the nodes (including security, pushing out updates, pushing out queries, etc.), hub-and-spoke style.
The interface between the EMR and the Edge Server should be fairly static. By keeping this interface static, the EMR vendors won’t need to be regularly asked to make adjustments. Keeping the components loosely coupled would be preferred (sharing data through a set of CDA or plain XML messages). This way EMRs can be brought on board with a focused effort and less maintenance. To be static, there needs to be a good, up front understanding of the majority of questions that would be asked within the network.
The query engine would allow for the PCRN central server to pose a single question that would then be distributed to all the edge server nodes in the network. The edge servers then query their data stores and can report back answers to potentially very complex queries in near real time back to the central server without exposing patient data.
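A minimal sketch of that hub-and-spoke flow, with toy in-memory “edge servers” (the data shapes, field names, and query are hypothetical):

```python
# Each edge server holds a transformed copy of its practice's EMR
# data and answers with aggregates only -- never patient-level rows.
class EdgeServer:
    def __init__(self, patients):
        # patients: list of dicts like {"age": ..., "dx": [...]}
        self.patients = patients

    def answer(self, query):
        # Run the central server's question locally; return a count.
        return sum(1 for p in self.patients if query(p))

def central_query(nodes, query):
    # Hub-and-spoke: fan the question out, sum the answers.
    return sum(node.answer(query) for node in nodes)

nodes = [
    EdgeServer([{"age": 44, "dx": ["depression"]}, {"age": 60, "dx": []}]),
    EdgeServer([{"age": 35, "dx": ["depression", "anxiety"]}]),
]
n = central_query(nodes, lambda p: "depression" in p["dx"] and p["age"] < 50)
print(n)  # 2
```

In a real network the “query” would of course be a vetted, signed artifact pushed out by the central server rather than arbitrary code, but the shape of the exchange is the same: questions travel out, only answers travel back.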
This is the simple Edge Server model. Additional features that could be considered include:
- Selective Participation – where the members of the PCRN can be selective as to which studies they want to participate in. This is less important for a transparent network and more important if there are features in the PCRN like Realtime Recruitment and Additional Data Collection.
- Realtime Recruitment – for some studies, patients would need to be recruited in realtime (e.g. when they arrive, are checked in, or are seen by a physician). Exposing another service (between the EMR and the Edge Server) would allow the Edge Server to do an initial screening of patients for more prospective-type studies. The Edge Server could then return a recruitment alert complete with consent forms.
- Additional Data Collection – for some studies, there would be a need to collect additional data that is not routinely collected, or is collected in a manner that cannot be consistently processed (e.g. free text). With some careful design work, this could be accomplished. The key issue here is ensuring that the data collected are stored in the patient’s EMR and not just on the edge server (the edge server is not a clinical source of truth). This does mean a greater and greater interdependency between EMRs and the Edge Server in terms of data design.
- Knowledge Translation Service – finally, as knowledge is generated, it should come back to practice. By re-tooling the realtime recruitment service into a KT service, information can be shared back to clinicians on applicable evidence for patients at the point of care. This can also be important if the findings from a study require that practices search for specific patients meeting specific criteria (e.g. a recall on a drug in a specific population). The KT engine and the recruitment engine are both forms of CDSS and could likely leverage a large number of shared components.
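The Realtime Recruitment feature above could be sketched as a screening rule the edge server applies when a patient is checked in (the criteria, field names, and alert format are all hypothetical):

```python
# Hypothetical eligibility rule the edge server might evaluate at
# check-in, returning a recruitment alert for the EMR to display.
STUDY_CRITERIA = {
    "min_age": 19,
    "required_dx": "depression",
    "excluded_dx": "bipolar disorder",
}

def screen_at_checkin(patient, criteria=STUDY_CRITERIA):
    eligible = (
        patient["age"] >= criteria["min_age"]
        and criteria["required_dx"] in patient["dx"]
        and criteria["excluded_dx"] not in patient["dx"]
    )
    if eligible:
        # The alert, not the patient data, travels back to the EMR.
        return {"alert": "Patient may be eligible for study X",
                "attach": "consent_form.pdf"}
    return None

alert = screen_at_checkin({"age": 42, "dx": ["depression"]})
print(alert["alert"])
```

The same rule-evaluation machinery is what a KT service would reuse: swap the study criteria for evidence-applicability criteria and the consent form for a point-of-care summary.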
The Edge Server concept has captured the imagination of several of the researchers and faculty in BC.
Jess McMullin has a good slide deck where he describes (slide 9) five levels of Design Maturity. (1)
Those levels are (paraphrased):
- Default – Status quo determines design.
- Style – Changes to look and feel
- Function – Design improves use
- Problem Solving – Seeks current problems and changes
- Framing – Redefinition of the problem itself.
This is a good list to remember in healthcare.
The potential for improvement (and some types of risk) increases as you move from default to framing. Also, it is harder for users to conceptualize the changes as you move through the levels. It’s easy for people to visualize “we are going to put this paper form on the computer”. It’s harder for them to consistently visualize “where we’re going you won’t need to document”, or large lists of requirements… As you move along the levels of design you need to rely on more iterative and visual tools to support shared and common understandings of the changes that are being considered.
1. I found a similar list of maturity levels for Business Analysts on Better Projects. If you are a BA or work with BAs, think about where you fit in this list of maturity for the various kinds of activities / projects you work on.
I have been known to say that falling down is part of learning and it does not mean failure.
Sometimes you fall down as you try to do things you haven’t done before. It is part of the process of growing. But if you fall, you need to get back up, as Leo from Zen Habits has said:
Sometimes, I don’t follow my own advice. I’m not perfect. I fall, but I try to get back up. And that’s what matters — not the falling, but the getting back up.
I have fallen many times along my way. I have tried to get back up as quickly as possible and reflected on what I had learned for each fall. Sometimes it took a while, but it has gotten a bit easier along the way.
My little boy has been teaching me this – acutely – over the last month as he has become a toddler. As hard as it is to see him fall, fall he does. Sometimes hard. Harder than any parent wants to see / hear. But he seems to know that it’s OK. Sure he gets bumped and hurt, but it does not stop him from getting back up.
And here’s the happy boy, with a big bandaid from one of his most dramatic falls.
By the way, he’s walking everywhere now and loving what he can do with his new found skills. I hope he keeps getting back up with the same vigour he shows now. I’ll do my best to help make the falls soft, as I know I won’t be able to prevent them all.
In my continued learning related to work, motivation, and change – all of which are part of this year’s learning activities – I came across this little blog post at Harvard Business Review on the importance of friends and how they impact work.
Here’s the quote that struck me:
Once you’re on the job, having a best friend at work is a strong predictor of success. People might define “best” loosely (think of this as kindergarten where you can have more than one “best” friend), but according to a Gallup Organization study of more than 5 million workers over 35, 56% of the people who say they have a best friend at work are engaged, productive, and successful while only 8% of the ones who don’t are.
Over the last twenty years – in what are really three different careers – I have been lucky to have many best friends at work. Indeed, I have often thought about how important it is to have them and have them as part of a team in order to get to the real work that needs to be completed.
I wanted to thank you, friends, and you know who you are, for being there and helping me engage in each of the major projects I have had the pleasure to work on. I could not have done these things without you.
I have, on occasion, thought about how one can apply agile software development methods to research projects and other non-IT projects. I have been thinking about how our groups and projects could benefit from some of these agile methods. This seemed like a good enough trigger for me to write about scrum and research.
Craig Brown (on Better Projects) lists the elements of scrum as follows:
- Start with a goal. Break down the goal into incremental steps.
- Discuss the steps with the team who needs to deliver the solution.
- Set standard time boxes. Do your best to deliver something practical and useful each time boxed iteration.
- The team needs to take instruction from the customer at the beginning of each iteration and report on what got ‘done’ at the end of each iteration.
- The team must set aside a portion of time at the beginning of each iteration to plan their work.
- The team needs to set aside a regular and brief portion of each day to communicate progress and problems to one another.
- The team needs to commit to continuous improvement and should set aside a portion of time each iteration to reflect on what went well, not well and where they can improve.
- The planning, review and reflection sessions, and the daily team update all need to be set at regular times to help the team achieve a sense of rhythm.
Let’s go through the eight elements, applying a research lens. I will be thinking in particular about the evaluation of health information systems.
- Goal: Check. Research projects should have a clear question in the beginning. There’s a need to define a method, which really could be broken down into incremental steps.
- Determine who is responsible for which work products: Check.
- Research projects can be time boxed. The question of delivery is interesting and depends on the methods. Some research protocols, such as randomized controlled trials, do not lend themselves to agile methods. They are more “waterfall” in their approach, if you will. You won’t know results until you have a large enough number of research subjects (although the set-up activities can easily be time boxed). Other methods are better suited to thinking about them with agile methods.
- Instruction from the customer: for some studies the “customer” is not explicit. Is it the granting agency? Is it the end user? The more engaged we as researchers are with our customer, the more likely the knowledge gained is knowledge applied. So we can benefit from defining a customer who is engaged.
- Planning Time: Yes, this can be easily done. I would add, especially when working with students, that this is key and ties into #6 & #7.
- Daily Progress Report: Typically, research projects are run more autonomously. One of the challenges in adapting scrum, with its emphasis on daily team meetings, is simply the timeline of some studies. There are often long pauses (e.g. ethics review) where things are on hold until approval is received. Does this require redesigning the research team structure (e.g. having an ethics rep on the team), or does it mean that scrum activities are bound to periods of higher activity? I am not sure.
- Continuous Improvement: For all of us this is important. For students who are on such a trajectory of learning and growth, even more so. Regular reflection helps improve skills faster than just doing blindly. Here we can learn to better plan, assess our capacity for work, and highlight “hard parts” of projects. This helps the current project as well as future projects. I am a big supporter of reflection, so this is easy.
- Regular times lead to productive rhythms.
In truth, many of the action oriented approaches are similar. Soft Systems Methodology also recommends iterations and reflections.
I think it is time to start more formally bringing in some of these ideas into the research plans that I am building now.