Archive for the ‘Software’ Category
I have been struggling recently between “using methods” to reach success and having to use “The Method”.
As organizations grow, there seems to be a tendency to standardize on The Method. PMOs often come up with “The Method”, and consulting firms will sell you theirs (either through hiring their consultants or directly). The Method provides standardized assessments and processes. You can make comparisons (useful for research / evaluation). You can scale up nicely by having everyone do the same thing.
Students learning and wanting to be successful want The Method. Something concrete to follow that will guarantee the end product is an “A”. Something that can be memorized and provides a level of safety in knowledge. I see this with medical students / residents as well as informatics / IT students.
It is also easier to teach about The Method. It is defined and discrete. Ten steps, five minutes per step = one 50-minute lecture. Done, you are certified!
However, people with experience that have developed their skills use methods, not The Method. They have an approach and a toolkit. In complex problems and complex situations they reach for the tools that they think will work and, while using them, assess their fit and course correct. Their approach supports communication with others, their detailed actions change based on their understanding of the problem.
This is harder to teach, especially in 50 minute lecture blocks. It is easier to model with students in practice. Residents can learn this by watching and modelling their preceptors. Informatics students can learn this (if they are lucky) from Co-Op terms. We can all learn this by reflecting, regularly, on what we do and why.
There is value in standardizing and having processes, definitely. They help us (a) reach common ground across team members and teams, and (b) cover our blind spots. For routine problems (complicated and simple, not complex), using the well tested and validated Method is better. Surgical outcomes benefit from using The Method, for example.
But they can also cause blind spots, if The Method is a poor fit or poorly applied. This is particularly true for complex problems, I feel.
With complex problems, it is impossible to know if a rigid method is a good fit until you are in the middle of it. Complex problems are, by their nature, unpredictable. So it is better to have a flexible, reflecting approach to these complex problems. Use aspects of your methods to help anchor you, as ways of reaching common understandings amongst team members and stakeholders, and then reach into your toolkit as needed when one method does not fit.
NOTE: This post is a follow up from the overall post on what does a clinical architect need to know.
Usability of systems is an important issue. It is not the first thing that comes to mind when one thinks of architecture, which is a shame. User centredness really should be a large part of what a Clinical Architect considers during design.
Of course, detailed user centred design work is not something the clinical architect can do single-handedly, especially in large organizations. But keeping the mantra in the forefront is important to making workable systems, and that is something the Clinical Architect should do.
I think about user centredness at a few levels:
- The single user interacting with one information system
- How do the screens flow, does that support the work, is the right information where it is needed, are movements from keyboard to mouse and back streamlined, etc.
- The single user interacting with systemS (plural) or the greater system -
- Where does a user need to go to get information, what does their day look like, etc. Are they interfacing with 3 systems to do one job, what are the greater outputs, are they hand modifying those outputs and why.
- The multi-user system -
- How does the CIS impact provider – patient interactions and how does it impact provider-provider interactions? What intentional changes are occurring and what UNintentional changes are occurring (or could occur) with the implementation.
Together these views can give an Architect a good picture of how the systems work as a whole for a user in their day to day work.
I’ve written about the bio-psycho-social approach to usability before and it is a useful framework to consider usability as well as user centred design.
In healthcare, there is also the idea of being patient centred as well. This is an extremely important perspective to consider. My recent research has shown how fragmented a patient’s care is and how their information can be scattered across literally dozens of records (see broken records).
As a final note, here is a recent ISO / IEC 62366 summary from User Focus that discusses usability of medical devices.
Nine years ago I started a little research / development project on the Palm PDA called Palm Prevention. This was my resident research project in family medicine and I eventually did a “pilot” study (forgive the pun) that was published. In the nineties I was very interested in PDAs in healthcare and had several projects looking at clinical education, decision making, access to reference materials, and creating tools that took simple context into account.
Palm Prevention was a quick, patient specific screening tool that essentially took 50 or so evidence-based clinical practice guidelines and presented them to the user, filtered based on a few key criteria that fit on a single Palm OS screen. Here are a couple of screen shots. The first screen is the start screen where the user provided a few key elements of patient history. The second is the filtered list of guidelines ranked in order based on evidence level (A being the strongest). From there, tapping on any line brought you details of that guideline.
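The core logic described above can be sketched in a few lines. This is a hypothetical reconstruction, not the original Palm OS code: the `Guideline` fields, the sample entries, and the `applicable` function are all illustrative assumptions about how "filter by a few patient criteria, then rank by evidence grade" might look.

```python
# Hypothetical sketch of Palm Prevention's core logic: filter a set of
# preventive-care guidelines by a few patient criteria, then rank by
# evidence grade (A strongest). All names and data are illustrative.
from dataclasses import dataclass

@dataclass
class Guideline:
    title: str
    sex: str          # "any", "F", or "M"
    min_age: int
    max_age: int
    grade: str        # evidence level: "A", "B", "C", ...

GUIDELINES = [
    Guideline("Blood pressure screening", "any", 18, 120, "A"),
    Guideline("Mammography", "F", 50, 74, "B"),
    Guideline("AAA ultrasound (ever smoked)", "M", 65, 75, "B"),
]

def applicable(guidelines, age, sex):
    """Return guidelines matching age/sex, strongest evidence first."""
    hits = [g for g in guidelines
            if g.min_age <= age <= g.max_age and g.sex in ("any", sex)]
    return sorted(hits, key=lambda g: g.grade)  # "A" sorts before "B"

for g in applicable(GUIDELINES, age=55, sex="F"):
    print(g.grade, g.title)
```

With a 55-year-old female patient, the sex-specific AAA screening entry is filtered out and the remaining guidelines come back A-grade first, mirroring the ranked list on the second screen.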
I released it free on the Internet at PDAGuidelines.com (now defunct, but it hosted several of my projects).
Today, I was on the AHRQ site and rediscovered their ePSS. Built on the USPSTF guidelines, it takes a similar approach and is available on multiple platforms, including the iPhone:
A slicker GUI, thanks to the more advanced platform, but a similar approach to what I was working on nine years ago. I have nothing to do with the AHRQ or their tool, but I am happy to see that the idea is still alive and that people find it useful enough that a very similar design exists 9 years later.
Yesterday I was lucky enough to be invited to give the opening Keynote for the Software Engineering in Health Care Workshop at ICSE 2009 and spend the whole day with a very thoughtful group of software engineers from around the world as we discussed issues related to designing software for healthcare. It was a very refreshing conversation with a slightly different perspective from the group. Some interesting activities and good people.
One of the topics that came back through the day was the issue of leveraging the context of data. This seemed to resonate in our discussions as a way to enhance current systems in new ways. The challenge is to define what those contexts could be and how they would support activities. The 5 W’s and 1 H are all important (who, what, when, where, why, and how). I’ve illustrated a few more specific elements in the diagram, but there are certainly more. It is also important to consider which context we are talking about. So far, there are at least two distinct contexts that need to be considered:
- Point of Capture – where the datum was documented. The context of that point in time is obviously important.
- Point of (re)use – where the datum is being accessed. This might be future point of care activities, or it might be point of reflection activities (such as quality improvement or health planning, etc).
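One way to make the two contexts concrete is to model them separately in the data structures themselves. This is a minimal sketch of my own (not from the workshop): the field names (`who`, `where`, `why`, the `reuse_purpose` values) are all assumptions, chosen to show capture context being recorded once with the datum while reuse context is supplied fresh at each access.

```python
# Illustrative sketch: capture context travels with the datum;
# reuse context is provided by whoever accesses it later.
# All field names and purpose strings are invented for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CaptureContext:
    """The W's at documentation time, stored once with the datum."""
    who: str          # author of the datum
    where: str        # clinical setting
    when: datetime
    why: str          # e.g. "routine visit", "ER triage"

@dataclass
class Datum:
    name: str
    value: str
    captured: CaptureContext

def render(datum: Datum, reuse_purpose: str) -> str:
    """Present the same datum differently depending on reuse context."""
    if reuse_purpose == "point_of_care":
        # clinicians want to know where the value came from
        return f"{datum.name}: {datum.value} ({datum.captured.where})"
    # reflection activities (QI, planning): month-level timing, no author
    return f"{datum.name}: {datum.value} [captured {datum.captured.when:%Y-%m}]"
```

The point of the sketch is only that the two contexts are distinct objects: losing the capture context at documentation time cannot be repaired later, whereas the reuse context is, by definition, supplied at the point of (re)use.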
Model driven design and the overall socio-technical complexity of healthcare were also two additional resonating themes for me today. The challenge of combining these two (and our relatively high rate of system failures in healthcare generally) does lead one to look for new methodologies for system design and implementation. More explicit modeling of context into systems to provide more reusable information (as opposed to data) might be part of the answer.
A great workshop, and I wish I could have stayed for the two days.
It is interesting how paper-trained we are. It is often hard for clinicians to think about how to design EHR systems – particularly documentation – in a way that breaks the locality of information and the paper-bound thinking of forms, and moves toward information. I see a lot of systems out there that promote having “digital forms” that are direct copies of paper forms — including forms that do not fit on a single screen (because they are 8 1/2 x 11 format instead of screen shaped) and “page turning” that corresponds to the pages of the form, not the GUI design of the computer.
You can see how this thinking works when clinicians request that certain forms be available on the computer. Building forms to match paper is often the quickest technical solution and one that, sadly, gets an easy “check” from users as they can compare the form to the computer screen for “accuracy”. Without thinking too far along the path, you can see how things get developed. Quickly scanning in the blank form as a PDF to create a background that REALLY matches the form and then adding fields on top to add text. For pizzazz add some auto-populating demographics and BOOM! It works even better than paper… Four hundred forms later and you have the electronic paper record.
The forms are important: they are how we consistently communicate with various groups on paper, and standard forms have real benefits. Just as standardized electronic user interfaces improve efficiency and safety, so do standard paper forms. But the benefit is for the end consumer, not necessarily for the clinician entering the data into the form.
Often there are better ways to design systems to support a user’s workflow while supporting the required output. There are examples of how to do this – building data capture to support clinical workflow. Clinical Decision Support (CDS) can be used to ensure that the right information is captured. Reports can then be generated to print out the appropriate forms as needed. Multiple forms would use the same data and the clinician would not have to jump around re-populating different “standard” forms with multiple pages that scroll off the screen.
The tricky part is, of course, capturing the data efficiently with sufficient semantics that the computer can translate your documentation into the various unstandardized tick boxes developed for specific forms, that works for CDS, and that a clinician will tolerate.
And that takes more work and a deeper understanding of the types of knowledge that are needed without the limitations of paper.
Of course, health information systems are not the only systems that have been built from their predecessors — that is how we evolve many things. Web “pages”, for example… oh, and there were trains that evolved from horse-drawn carriages.
We had our second Engineering 4 Health Challenge at UVic yesterday and it was another success! Some great students participated and some really fantastic ideas were generated. The topic for this challenge was the same — use the OLPC (One Laptop per Child) as the design platform for creating health applications for students in developing countries. One project focused on engaging the whole family in their health through the OLPC and the other was a health oriented game that provided health education in the form of game challenges. Really interesting approaches.
The paper storyboarding design for the event seems to be quite manageable and has generated some good results. We managed to squeeze it into a 1/2 day.
We started by having a group brainstorming session – timed, with two facilitators. Facilitators helped clarify ideas from the participants and encouraged students to speak out their ideas, often using one initial idea (“build a game”) to create several specific ideas about games. One of our facilitators (not me!) started concept mapping ideas, to show the linkages.
Students were then broken into small groups and encouraged to choose an idea. The small groups (4-5 students plus 2-3 facilitators) often found that as they selected ideas, they not only drew out more detail, but some also merged several ideas into one package.
The next step for the students was to begin to work out the details of the design and a high level flow. We did this with the students through paper prototyping and pasting together a high level storyboard on 4′x6′ paper. We used paper mock-ups of the OLPC laptops (below) so the students could draw their rough screen sketches on them and describe some of the functional activities on the pages. This really helped quickly make ideas real and also was accessible to students — some focused more on GUI design and others more on functional description.
All the individual pictures were placed on the paper with arrows used to denote typical screen flows for users. Not everything was on the storyboard, obviously. Many of the ideas they had were quite complex and would require a fair amount of content, but the pages really did give a good idea about how the systems might work, following along a specific scenario or giving an overview of the path of a game.
At the end of the morning, each group was able to present their idea to the rest of the students.
I definitely enjoyed this project and wanted to thank all the students, volunteers, faculty, staff and teachers who made this happen.
I have been working with a friend and colleague over the past month sketching out an idea to develop software for the XO laptop, which is part of the One Laptop per Child (OLPC) project. The idea is more about how to get others to design and build software for the OLPC, with us helping to facilitate.
We are exploring how to engage students in BC to design and develop health and health education materials with partner communities in developing countries who are part of the OLPC. It is an exciting idea to bring students, both high school and university, together to learn about computer science and about healthcare while flexing their creative design muscles in coming up with tools to help children thousands of miles away.
Seems like we are not the only group who has thought of this, of course. There are several projects proposed and in development through the OLPC, which can be found on the OLPC Wiki.
We are piloting our OLPC-Health Design Fest this month – it’s a half day paper prototyping event. I am very excited to see how it works.
The title of an article from Harvard Business Review keeps coming up: Breakthrough Thinking from Inside the Box. While certainly not the first place to use the play on “out of the box” thinking, it is a good construct.
I read this many months ago and do find the idea pops into my head whenever I am in a meeting that stalls. Often these are my own meetings, where I realize that I haven’t provided enough structure to promote creativity.
Having a limit or constraint to work with provides a foil for creativity and this article does a good job of providing some examples that can be used. The full article is available for purchase but the 21 question sidebar is accessible, I believe.
Pulling people out of their comfort zone is a good way to stretch the brain and let some creativity happen. Describing the box, drawing on other areas of experience, etc are key to pulling people out of their zone into a new area.
The trick is, of course, to pick the right box(es) to use. You want to stretch people enough and to stretch them in the right direction. Too far out of their range is as bad as having too many options. It would be like asking my grandmother to consider quantum mechanics… you would have gotten a blank stare and been “tsked” out of the room quickly. But asking people to imagine their parents as patients using a personal health record is something that a developer could probably stretch into.