By Rosie Pendrous.
In psychological research, we rely on being able to measure a construct (such as depression) or a behaviour (such as frequency of past self-harm) in a valid and reliable way. In doing so, we need to balance developing measures that accurately tap into the construct or behaviour we intend to measure against not using up too much of our participants’ time in answering the questions. We often choose single-item measures over multi-item measures for this reason; they are easy and quick to answer and can have high face validity under the right circumstances. They are also prevalent in suicide and self-harm research when measuring key variables such as frequency of past suicide attempts and self-harm [e.g. 2]. Indeed, in suicide and self-harm research it is imperative that we can accurately measure and predict suicide and self-harm, not least because we know that past suicidal behaviour is a strong predictor of future suicidal behaviour. However, single-item measures are not without limitations, including in suicide and self-harm research [4, 5]. The aim of this blog is to briefly summarise, with references to sources, some of the problems and advantages associated with using single-item measures, and to consider when they may be most appropriate in this area of research.
Why are single-item measures problematic?
Researchers have found that single-item assessments of suicidal behaviour may lead to misclassification of the behaviour. In this 2016 study by Hom et al., the researchers compared the results of measuring past suicide attempts using a single item, a multiple-item questionnaire, and a face-to-face interview. All 100 participants recruited initially endorsed a past suicide attempt on the single item (“Have you ever attempted suicide, where you attempted to kill yourself?” Yes/No). However, only 67% of the participants endorsed a past suicide attempt on an open-ended multiple-item questionnaire (asking the same single-item question plus additional questions such as “What was the method used for each attempt?” and “Did you require any medical treatment for these attempts?”), and only 60% endorsed a suicide attempt following the interview (which asked additional questions about timing, personal circumstances, suicidal thinking and intent to die). Indeed, the interview distinguished further categories into which the reported behaviour could fall, such as an aborted attempt or an interrupted attempt. Overall, this suggests that such a single item may inflate the rate at which past suicide attempts are classified, a pattern of results also reflected elsewhere [e.g., 5].
Generally speaking, single-item measures can also have limited sensitivity and reliability. Using a single-item response to capture where someone lies on a construct may not be sensitive enough to clearly discriminate between participants in your sample. In other words, if your item, say, “How many times have you attempted suicide?”, is rated on a five-point scale, this restricts the range in which people can score and does not lend itself to reaching those who may have attempted suicide more times. In multiple-item measures, this is less of a problem: 10 questions on a 5-point Likert scale would give possible total scores from 10 to 50, increasing your ability to tell participants apart on the scale (i.e. the measure’s capacity to discriminate between respondents). Therefore, single-item measures may require a larger sample size to ensure a broad range of scores. With a restricted range of scores, it is also more difficult to tell which other variables co-vary with it, and therefore to predict suicide attempts.
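As a back-of-envelope illustration of the point above (using the same hypothetical item counts and scale points mentioned in the paragraph, not data from any cited study):

```python
# Hypothetical illustration: how many distinct scores each format allows.
# A single 5-point item can take 5 values (1-5); summing ten 5-point
# items yields totals from 10 (all 1s) to 50 (all 5s), i.e. 41 values.
single_item_scores = list(range(1, 6))
ten_item_totals = list(range(10, 51))

print(len(single_item_scores))  # 5 possible scores
print(len(ten_item_totals))     # 41 possible total scores
```

More distinct score values give the summed scale more room to spread respondents out, which is exactly the discrimination advantage described above.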
Reliability refers to the degree to which a measure assesses the construct free from error and gives consistent results. Common assessments of internal reliability (e.g. Cronbach’s alpha or McDonald’s omega) can only be used when a measure has more than one item. This means that with single-item measures, one cannot produce an internal consistency-based reliability estimate in this way. Reliability is also affected because “measurement error averages out when individual scores are summed to obtain a total score” [p. 67; 7], but this cannot happen with a single item. That said, while measurement error can be increased by relying on single items, measurement error is arguably more prevalent for multiple-item scales than for single-item measures of ‘concrete’ constructs [see below for a definition; 8].
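To make the “more than one item” requirement concrete, here is a minimal sketch of Cronbach’s alpha in Python (the function and data are my own illustration, not from the sources cited): the k/(k − 1) term is undefined when k = 1, so the statistic simply cannot be computed for a single-item measure.

```python
import statistics


def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: a list of k lists, each holding one item's scores across
    the same n participants. Requires k >= 2, because the k/(k-1)
    term is undefined for a single item.
    """
    k = len(items)
    if k < 2:
        raise ValueError("Cronbach's alpha needs at least two items")
    item_variances = [statistics.variance(col) for col in items]
    totals = [sum(scores) for scores in zip(*items)]  # each person's total
    return (k / (k - 1)) * (1 - sum(item_variances) / statistics.variance(totals))


# Two perfectly consistent (hypothetical) items give alpha = 1.0.
print(cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]))
```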
So when might you want to use single-item measures?
Researchers [9, 10] outline a number of reasons why single-item measures are appealing in some psychological research, reasons that may extend to suicide and self-harm research, not least pragmatic and ethical ones. For those taking part in research, single items are easier and quicker to answer. From a researcher’s perspective, items that are easier and quicker to answer may reduce recruitment costs, response biases from monotonous responding to a longer measure with the same response options, missing data, and participant burden. Single items may also have psychometric advantages: they can reduce the risk of common method variance, where spurious correlations arise from the shared response format of a scale rather than the content of its items, and they can have high face validity, depending on the conceptual nature of the construct you intend to measure.
As Fuchs and Diamantopoulos state [p. 203, 10], “the selection [of] single- versus multiple-items depends to a great extent on the construct of interest. Particularly relevant in this respect is whether the focal construct is concrete or abstract.” ‘Concrete’ constructs, as opposed to ‘abstract’ constructs, are those considered conceptually homogeneous, where having more items would be redundant. Potentially useful examples of concrete constructs are simple ones such as sleep duration or frequency of self-harm, where frequency or duration can only be measured along one measurement dimension. ‘Abstract’ constructs, on the other hand, are those which are conceptually multidimensional in nature.
Summary of possible considerations for using single-item measures
It is clear that single-item measures have their place in psychological research, but they also come with some important issues and considerations. Below, I offer some potential considerations for the issues touched on above and for when one may want to use single-item measures:
- Consider the most appropriate definition of the construct to make sure your single item will have sufficient construct and face validity, or simply reword the single question to fit the definition.
- Consider how you plan to use this construct in your analysis and design. For example, if the variable is key to your hypothesis or your design is longitudinal (where the construct may be affected by circumstantial changes), then you may wish to use a more in-depth and reliable multiple-item measure of the construct; if the construct is a moderator or control variable, it may be appropriate to measure it with a single item.
- How some have chosen a single item from a multiple-item scale. If you decide to use a single item for an abstract construct, because a single item is most appropriate for your study, some researchers [e.g. 3] have selected the item that loads most highly in initial validation work (i.e. in a factor analysis) and that has the highest internal reliability. However, it is worth bearing in mind that doing so does not necessarily tell you why this item taps into the construct more than the other items do.
- To ensure that your item is sensitive enough to capture the range of scores in your sample on the construct it represents, you could consider an open-ended response option (whereby participants enter their own value) or a wider set of response options (e.g. a visual analogue scale ranging from 0 to 100). In suicide and self-harm research, this may be most appropriate when asking about the frequency of these behaviours.
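The item-selection heuristic described in the list above (picking the highest-loading item from initial validation work) can be sketched as follows; the item names and loadings here are made-up numbers for illustration, not from any published scale:

```python
# Hypothetical factor loadings for a 5-item scale from an initial
# factor analysis; a higher loading indicates a stronger relation
# between that item and the underlying factor.
loadings = {
    "item_1": 0.62,
    "item_2": 0.81,
    "item_3": 0.74,
    "item_4": 0.55,
    "item_5": 0.69,
}

# Choose the highest-loading item as the single-item proxy.
best_item = max(loadings, key=loadings.get)
print(best_item)  # item_2
```

As the bullet above notes, a high loading alone does not explain why that item taps the construct better than the others; it is a convenience heuristic, not a substitute for conceptual justification.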
1. Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3), 309.
2. Franklin, J. C., Ribeiro, J. D., Fox, K. R., Bentley, K. H., Kleiman, E. M., Huang, X., … & Nock, M. K. (2017). Risk factors for suicidal thoughts and behaviors: A meta-analysis of 50 years of research. Psychological Bulletin, 143(2), 187. http://dx.doi.org/10.1037/bul0000084
3. Loo, R. (2002). A caveat on using single-item versus multiple-item scales. Journal of Managerial Psychology, 17(1), 68-75. https://doi.org/10.1108/02683940210415933
4. Hom, M. A., Joiner Jr., T. E., & Bernert, R. A. (2016). Limitations of a single-item assessment of suicide attempt history: Implications for standardized suicide risk assessment. Psychological Assessment, 28(8), 1026. https://doi.org/10.1037/pas0000241
5. Millner, A. J., Lee, M. D., & Nock, M. K. (2015). Single-item measurement of suicidal behaviors: Validity and consequences of misclassification. PLoS ONE, 10(10), e0141606. https://doi.org/10.1371/journal.pone.0141606
6. Peter, J. P. (1979). Reliability: A review of psychometric basics and recent marketing practices. Journal of Marketing Research, 16(1), 6-17. https://doi.org/10.1177/002224377901600102
7. Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
8. Cote, J. A., & Buckley, M. R. (1987). Estimating trait, method, and error variance: Generalizing across 70 construct validation studies. Journal of Marketing Research, 24(3), 315-318.
9. Hoeppner, B. B., Kelly, J. F., Urbanoski, K. A., & Slaymaker, V. (2011). Comparative utility of a single-item versus multiple-item measure of self-efficacy in predicting relapse among young adults. Journal of Substance Abuse Treatment, 41(3), 305-312. https://doi.org/10.1016/j.jsat.2011.04.005
10. Fuchs, C., & Diamantopoulos, A. (2009). Using single-item measures for construct measurement in management research: Conceptual issues and application guidelines. Die Betriebswirtschaft, 69(2), 195.
11. Rossiter, J. R. (2002). The C-OAR-SE procedure for scale development in marketing. International Journal of Research in Marketing, 19(4), 305-335. https://doi.org/10.1016/S0167-8116(02)00097-6
Rosie Pendrous (@rosiependrous) is a PhD student, Research Assistant and a member of the Centre for Contextual Behavioural Science at the University of Chester (email@example.com).
*Feature photo by Nick Morrison on Unsplash.