LindaKKaye

Observations from a fatigued reviewer

Updated: Apr 10

Lire une traduction française (Read a French translation)


Like many academics and researchers, I receive tonnes of requests to review manuscripts from many different journals. Given my specialism in cyberpsychology, the majority of these, of course, align with topics in this area. Unfortunately, there is an overwhelming abundance of research which adopts a very narrow and pessimistic view of technologies, and thus a lot of the papers I am asked to review follow a very similar format:


“Problematic [insert social media platform] use and [insert negative mental health variables]”. In almost all cases there are the same types of problems with this research, so much so that I usually think I could simply use the same reviewer comments every time I receive such a request.


I know I am not the only person fatigued by these issues and so I felt it was time to write a blog post which identifies what I see as being the most typical issues with these sorts of studies.

 

1. Poor conceptualisation of constructs

One of the prominent issues in research on “problematic” technology use is the interchangeable use of terms. This is something we have written about previously in respect of “internet addiction” (Ryding & Kaye, 2018), but researchers, even within the same manuscript, have been known to refer interchangeably to “problematic use”, “addiction”, “excessive use” and “pathological use”, which are arguably different constructs. This really doesn’t help readers navigate this field and understand what the specific construct of interest is, or what it contributes to the research field. It also has implications for the measurements which are used, discussed further in point 2.

 

Validity of measures

2. Because the conceptualisation of the research constructs is often woolly, this often has a knock-on impact on the measures which researchers select. It is very common to see researchers theorising that they are interested in “problematic use” but then using addiction scales. Surely it is a basic lesson in Psychology Research 101 that you need to make sure you are actually measuring the construct you are interested in? Esteemed colleagues have recently written about this issue and nicely highlighted its implications (Davidson et al., 2022).



3. A wider issue with the constructs in this research literature is that it is debatable whether they are actually meaningful and valid constructs in the first place, despite there being measurement tools which have been found to be psychometrically valid. We have written about this previously and demonstrated that it is easy enough to validate a scale which measures a totally nonsensical construct such as “offline friend addiction” (Satchell et al., 2021). Interestingly, around two-thirds of people show this unfortunate “addiction” on our satirical scale! So basically, just because you have a fancy measure, it doesn’t mean that what you are measuring is a meaningful construct.
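As a rough illustration of this point (not the actual O-FAQ analysis), internal-consistency statistics such as Cronbach’s alpha will look perfectly respectable for any set of items that people answer in a correlated way, regardless of whether the construct those items supposedly measure is meaningful. A minimal sketch with invented data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented responses in the spirit of an "offline friend addiction" style scale:
# one latent trait drives all eight items, so alpha comes out high even though
# the construct the items claim to measure is nonsense.
rng = np.random.default_rng(42)
latent = rng.normal(size=(500, 1))                    # latent trait per respondent
raw = latent + rng.normal(scale=0.8, size=(500, 8))   # 8 correlated items plus noise
likert = np.clip(np.round(raw + 3), 1, 5)             # squash onto a 1-5 response scale

print(f"Cronbach's alpha = {cronbach_alpha(likert):.2f}")  # high, despite a nonsense construct
```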


 

Exploring usage behaviour without measuring behaviour

4. Another wider issue in this field is that it deals with behavioural aspects of technology use, yet very little research actually takes behavioural measures. So, despite a wide literature which adopts the view that social media use may be a behavioural addiction, very little of this research actually takes any measures of behaviour to verify that this is indeed the case. We have written about this and noted some opportunities which behavioural measures could bring to this field (Ellis et al., 2018).
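For illustration only, here is a minimal sketch of what one simple behavioural measure might look like, assuming access to timestamped usage logs; the session data and layout are hypothetical, not taken from any of the cited work:

```python
from datetime import datetime

# Hypothetical timestamped app-session log for one participant on one day.
sessions = [
    ("2024-03-01 08:12", "2024-03-01 08:25"),
    ("2024-03-01 12:40", "2024-03-01 13:05"),
    ("2024-03-01 21:30", "2024-03-01 22:10"),
]

def logged_minutes(sessions):
    """Total minutes of logged use: a behavioural counterpart to self-report estimates."""
    fmt = "%Y-%m-%d %H:%M"
    return sum(
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
        for start, end in sessions
    )

behavioural = logged_minutes(sessions)   # 78 minutes in the log
self_report = 180                        # e.g. "about 3 hours" on a survey item
print(f"Logged: {behavioural:.0f} min vs self-reported: {self_report} min")
```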


 

Falsely claiming causality

5. In terms of methodological approach, the majority of this research is cross-sectional, despite many authors claiming that so-called “problematic use” causes poor mental health outcomes. This is especially problematic when it is this type of evidence which is typically at the forefront of influencing policy and public debate on this issue. It seems to be largely driven by a tendency towards technological determinism, assuming that one factor causes another without recognising that these factors exist in a wider context. A great paper published recently speaks to this and demonstrates that when social media use is measured alongside a wider range of other factors relating to adolescent mental health, its effect is somewhat minuscule (Panayiotou et al., 2023). Similarly, other research suggests that there are more significant risk factors for adolescent well-being than digital technology use (Orben & Przybylski, 2019).
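A toy simulation (not taken from any of the cited papers) makes the point: if some wider contextual factor drives both social media use and a mental health outcome, a cross-sectional correlation appears even when use itself has zero causal effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# A shared contextual factor (e.g. pre-existing distress) influences both how much
# someone uses social media and their mental health score; use itself has no causal
# effect on the outcome (there is no social_media_use term in the outcome equation).
context = rng.normal(size=n)
social_media_use = 0.5 * context + rng.normal(size=n)
mental_health = -0.5 * context + rng.normal(size=n)

r = np.corrcoef(social_media_use, mental_health)[0, 1]
print(f"Cross-sectional correlation: r = {r:.2f}")  # around -0.2 despite zero causal effect
```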

 

Inconsistency in scoring techniques

6. This is specifically an issue when authors determine cut-offs from their “problematic use” scales to decide what is deemed problematic and what is not. We have written about this recently and demonstrated with data the implications of this (Connolly et al., 2021). To summarise, there is a mixture of techniques available and these are often used interchangeably or applied incorrectly. A common approach is for authors not to use cut-offs at all and instead to use the total or mean score of a “problematic use” measure without any sub-sample analyses. This is concerning, as oftentimes the actual average or median scores on these scales are relatively low and would indicate that participants are generally not endorsing or “agreeing” with statements about problematic aspects of use. Yet these authors still make inferences about how “problematic use” relates to X, Y and Z, which brings about a conceptual issue.


Other scoring practices use a ‘polythetic’ classification criterion, whereby “problematic” use is determined by an individual responding at the mid-point (‘neither agree nor disagree’) or above, usually 3 or more on a 5-point scale, on more than half of the items in a given questionnaire (i.e. on a 10-item questionnaire rated on a 5-point scale, they would need to respond “3” or above on at least six of the items). Alternatively, a more conservative approach is ‘monothetic’ scoring, which classifies a person as a “problematic user” only if they score at the mid-point or above (e.g., 3 or above on a 5-point scale) on all items of a scale. However, the problem persists that there is little consistency in how researchers in this field use scoring techniques, and even when cut-offs are determined, no sub-sample analyses between “problematic users” and “non-problematic users” are undertaken, which seems somewhat strange.
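To make the two classification rules concrete, here is a minimal sketch of how they might be applied to 5-point Likert responses; the threshold of 3 and the “more than half of the items” rule follow the description above, and the example responses are invented:

```python
def polythetic(responses, threshold=3):
    """'Problematic' if the respondent scores at or above the threshold on more than half of the items."""
    return sum(r >= threshold for r in responses) > len(responses) / 2

def monothetic(responses, threshold=3):
    """More conservative: 'problematic' only if every item is at or above the threshold."""
    return all(r >= threshold for r in responses)

# Invented responses to a 10-item, 5-point "problematic use" scale.
respondent = [3, 4, 3, 2, 5, 3, 3, 1, 4, 3]

print(polythetic(respondent))   # True: 8 of 10 items are rated 3 or above
print(monothetic(respondent))   # False: two items fall below the mid-point
```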

 

Lack of theoretical insight

7. Research in this field rarely offers much insight beyond use per se to theorise why social media use, for example, might relate one way or another to mental health outcomes. Measurements of use or “problematic use” are often vague or ill-defined and don’t offer insight into what, how or why people may be engaging with a given technology, and thus why, theoretically, it should be expected to relate in any way to mental health variables. I’ve written about this recently in a wider reflection on measurement in social media research (Kaye, 2021).

 

Narrow and biased literature review

8. Finally, a common issue I see is that concepts such as “social media addiction” are introduced by authors in their manuscripts as if they were established and agreed constructs. This is far from the case, and it is often disappointing to see that authors don’t even make reference to the highly debated and volatile field in which their research is situated. As a reviewer, to me this comes across at worst as highly biased and agenda-driven and at best makes me question the authors’ ability to critically appraise how their work fits into the existing literature.


I hope this blog post might resonate with those in the field and related areas, and motivate us to drive better quality research on these fascinating and societally important issues.


 

References

Connolly, T., Atherton, G., Cross, L., & Kaye, L. K. (2021). The Wild West of measurement: Exploring problematic technology use cut off scores and their relation to psychosocial and behavioural outcomes in adolescence. Computers in Human Behavior, 125, 106965. https://doi.org/10.1016/j.chb.2021.106965


Davidson, B. I., Shaw, H., & Ellis, D. A. (2022). Fuzzy constructs in technology usage scales. Computers in Human Behavior, 133, 107206. https://doi.org/10.1016/j.chb.2022.107206


Ellis, D. A., Kaye, L. K., Wilcockson, T. D. W., & Ryding, F. C. (2018). Digital traces of behaviour within addiction: Response to Griffiths (2017). International Journal of Mental Health and Addiction, 16(1), 240-245.


Kaye, L. K. (2021). Exploring “socialness” in social media. Computers in Human Behavior Reports, 3, 100083. https://doi.org/10.1016/j.chbr.2021.100083


Orben, A., & Przybylski, A. K. (2019). The association between adolescent well-being and digital technology use. Nature Human Behaviour, 3, 173-182.


Panayiotou, M., Black, L., Carmichael-Murphy, P., Qualter, P., & Humphrey, N. (2023). Time spent on social media among the least influential factors in adolescent mental health: Preliminary results from a panel network analysis. Nature Mental Health, 1, 316-326. https://doi.org/10.1038/s44220-023-00063-7


Ryding, F. C., & Kaye, L. K. (2018). “Internet addiction”: A conceptual minefield. International Journal of Mental Health and Addiction, 16(1), 225-232. https://doi.org/10.1007/s11469-017-9811-6


Satchell, L., Fido, D., Harper, C., Shaw, H., Davidson, B. I., Ellis, D. A., Hart, C. M., Jalil, R., Jones, A., Kaye, L. K., Lancaster, G., & Pavetich, M. (2021). Development of an Offline-Friend Addiction Questionnaire (O-FAQ): Are most people really social addicts? Behavior Research Methods, 53, 1097-1106. https://doi.org/10.3758/s13428-020-01462-9

