By Wouter van Ballegooijen
Everyone has a smartphone. Well no, but 80% of people do (in the UK and the Netherlands) and it’s safe to say that your average patient in adult and adolescent mental health care will have one. I recently calculated that each of my two-year-old smartphone’s 8 processor cores has 15 times the clock speed of my first Windows PC’s processor in the 90s. What I mean to say is that we carry very powerful computers in our purses and pockets. And we can use them in research.
Obtrusive or unobtrusive
The good thing about smartphones is that people carry them around all day. Smartphones can also make sounds, so you can bother your research participants all day with notifications, hoping they respond to a few questions about how they feel at that moment. The great advantage of this kind of mobile measurement is that you track people’s mental health as they go about their daily business, i.e. in their ‘natural’ environment. That’s why it’s called ecological momentary assessment (EMA) or experience sampling methods (ESM). And because you ask about the present moment, there is no recall bias.
This may sound like you’d put too much of a burden on a vulnerable population, but I don’t think so. Tracking your own behaviour with Fitbits is widespread. Period tracking apps are very popular and many of them include mood ratings. EMA is nothing new in that sense. Patients in mental health care, the ones in my study at least, see the merits of keeping track of their mood, suicidal ideation and other symptoms. The app’s graphs show them it is not always going bad, and also show them exactly when their symptoms get worse. I have only included a handful of patients so far, but those who are enrolled say it’s helpful.
Instead of bothering your participants with notifications, you could also just ask them to install an app that does all the measurement for them. Smartphones can keep track of a wide range of variables, like movement, location, temperature, brightness, how hard you push the screen and app usage. There are also studies on voice tone and text analysis of messages. All this automated data gathering is called unobtrusive EMA by some colleagues of mine, because participants don’t have to do anything; a notification would be obtrusive. I don’t fully agree, because automated measurement of all types of behaviour doesn’t sound so unobtrusive to me, whereas a notification can be ignored. Let’s call it active (i.e. requiring user input) and passive EMA. Anyway, this is an exciting field of research, although I have yet to see exactly how a person’s movement, location etc. can be used in mental health care.
Let’s take a look at active, notification-based EMA. What can we do with that in research? We can look at psychological processes, and we can do that in two ways. One way is to analyse things like emotional stability, variability and inertia, that is, the development of a single variable over time. Changes in the stability and inertia of emotions can tell us when a mental state (e.g. feeling OK) is moving to another state (e.g. feeling depressed). The other way is to analyse the dynamics between variables. For example, an increase in suicidal ideation might be preceded by an increase in rumination a few hours earlier. If we figure out a model that applies to most people (or to subgroups), we can use it to forecast an increase in suicidal ideation, or suicidal behaviour. And if we can do that, we can intervene (e.g. a safety-planning app popping up) in an attempt to stop the process before it gets out of hand. This is, in my opinion, an important and motivating goal, but there’s a long way to go.
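To make the between-variables idea a bit more concrete, here is a minimal sketch of a lag-1 analysis: does rumination at one prompt predict suicidal ideation at the next? The data and the coefficient (0.6) are entirely made up for illustration; a real analysis would use multilevel or time-series models, not a single simple regression.

```python
import random

random.seed(0)

# Synthetic hourly EMA series (purely illustrative numbers):
# ideation at time t partly follows rumination at time t-1.
n = 200
rumination = [random.gauss(0, 1) for _ in range(n)]
ideation = [0.0] * n
for t in range(1, n):
    ideation[t] = 0.6 * rumination[t - 1] + random.gauss(0, 0.5)

# Lag-1 regression: slope of current ideation on earlier rumination.
x = rumination[:-1]
y = ideation[1:]
mx = sum(x) / len(x)
my = sum(y) / len(y)
beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

print(round(beta, 2))  # should land close to the true 0.6
```

A positive slope here would be the kind of signal a forecasting model could act on, e.g. triggering a safety-planning prompt when rumination spikes.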
Another good use of smartphone-based EMA in suicide research would be studying treatment effects. Set up your standard randomised controlled trial, use retrospective questionnaires only for some baseline characteristics, and let an EMA app measure the outcome variables. There are two very good reasons to do that. One is that you can observe when the treatment effect kicks in, i.e. after which session symptom severity starts to decrease. And if you have multiple groups, each receiving the therapy elements in a different order, you can see which element is actually helping which people. The other reason is statistical power. Isn’t it annoying that according to Cohen you would need 200 patients to detect a meagre between-groups effect of d = .50? Most RCTs are underpowered, perhaps because it is hardly feasible to include 200 patients within the timespan of one PhD trajectory. But you may have heard that the more observations you have of a variable per participant, the more statistical power you have. EMA studies may get you 50-100 observations per participant. A quick power analysis tells me 26 participants would be sufficient to detect a small effect if you have 50 observations per participant. Of course, there are a lot of issues with such a power analysis, but you see the point: loads of observations lead to more powerful and also more reliable results.
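A back-of-the-envelope version of such a power analysis can be sketched as follows. This is a deliberate simplification: a normal approximation for a two-arm comparison, plus a design effect for repeated measures with an assumed within-person correlation (ICC) of .5 that I picked for illustration. It will not reproduce the exact figures above, which depend on the model and assumptions used, but it shows how extra observations per participant buy power:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80, k=1, icc=0.5):
    """Per-group sample size for a two-arm trial (normal approximation).

    k is the number of repeated observations per participant and icc is
    the assumed within-person correlation; both are illustrative
    assumptions, not values from any particular study.
    """
    z = NormalDist().inv_cdf
    n_single = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2
    deff = 1 + (k - 1) * icc          # design effect of repeated measures
    return ceil(n_single * deff / k)  # averaging k correlated observations

# One observation per person: the classic ~63 per group for d = .50.
print(n_per_group(0.5))                 # 63
# 50 observations per person (assumed ICC = .5): far fewer participants.
print(n_per_group(0.5, k=50, icc=0.5))  # 33
```

The exact numbers hinge heavily on the assumed ICC: highly correlated observations within a person add less independent information, so the gain from extra prompts shrinks as the ICC rises.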
The (near) future
More and more apps are being developed and tested for the treatment of suicidal ideation, such as self-help apps or add-ons to face-to-face therapy. Now imagine a future where these apps become responsive to EMA input, giving patients the right content in the right situation. As I said above, actually preventing suicide attempts this way sounds very far away. Then again, in the past few years, EMA methods as well as analytic techniques have been developing rapidly, and who knows what will happen in the next few years.