Critiquing an article is fundamental to research utilization and evidence-based practice. The process of research critique is an intellectual activity that helps one decide to what extent research may be useful in practice, whether the findings are trustworthy, and how they compare with other related research. While the term “research” has been used rather freely in the past, there has also been a tendency to perceive research as an end in itself rather than as a means to an end, namely improvement in the quality of care provided to patients.
As LoBiondo-Wood et al. (2002) put it, “the meaning of quality research should contribute to knowledge relevant to care and service. Further, research should provide a specialized scientific knowledge base that empowers a profession to anticipate and meet these challenges and maintain its societal relevance”. The Internet, once again, has proven to be the most powerful tool for disseminating information worldwide.
It has been used to deliver health and medical information to millions of users, as it can disseminate psychoeducation at minimal cost, especially to those who may not have sought formal treatment for mental health services. Moreover, it can deliver online interventions to a large audience, over a third of whom report that their health has improved. Further, its programs can be modified to suit users’ needs: users can be alerted at any time to changes, track updates, or follow up on their case, overcoming geographical barriers.
This paper will critique the seminar “Delivering internet interventions for depression: Free range users and one hit wonders” by Helen Christensen, Kathy Griffiths, Chloe Groves and Ailsa Korten, based on the journal article “Free range users and one hit wonders: community users of an Internet-based cognitive behavior therapy program”, published in the Australian and New Zealand Journal of Psychiatry 2006; 40:59–62.

Research Purpose and Aim
The research problem is a “situation in need of a solution, improvement or alteration or a discrepancy in the way things are or the way they ought to be” (Burns & Grove, 1993). The article “Free range users and one hit wonders: community users of an Internet-based cognitive behavior therapy program” (2006) is a study evaluating the predictors of symptom change and the methods that might increase user ‘compliance’ on websites designed to improve mental health outcomes (p. 59). The aims of the study are twofold: first, to examine predictors of expected final depression and anxiety scores as a function of characteristics such as gender, number of modules completed, and users’ initial anxiety and depression scores; and second, to compare user characteristics and outcomes from the original MoodGYM site (Mark I) with those of public registrants of the new public version of the site (MoodGYM Mark II).
For this second aim, the comparison covers gender, initial depression and anxiety scores, and completion rates for the two site versions, to examine whether structural changes to the site resulted in different user characteristics and outcomes (p. 61). This study is important for online users, as it hopes to show that shorter interventions lead to similar health outcomes and that even brief bursts of information lead to increased help-seeking. In addition, it is important for online healthcare providers such as MoodGYM in determining whether website adherence or “stickiness” can be improved or ceases to be an issue (p. 62).

Hypothesis and Research Questions

In a research study, the researcher must formulate as many hypotheses as needed to address all aspects of the research problem. A research hypothesis directs the study, unifies theory and reality, and helps extend the knowledge base. It is a statement about the relationship between two or more variables; it provides direction for gathering and interpreting data and identifies the population to be studied.
LoBiondo-Wood & Haber (1998) pointed out that hypotheses are never proven; they are accepted or rejected, supported or not supported. Christensen et al. used a directional hypothesis, as they specified the expected direction of the relationship between the independent and dependent variables: the dependent variable was the final score, and the independent variables were gender, number of modules completed (treated as three dummy variables), initial depression score, and a quadratic function of the initial score.
Hence, the following research hypotheses were tested: a) that shorter internet interventions are associated with decreased depression symptoms; b) that even brief bursts of information lead to increased help-seeking (“one hit wonders”); and c) that much better outcomes are expected if users can be retained on the site for longer periods of time (p. 60).
This research is a follow-up study, since previous research had shown that the interactive MoodGYM program delivered cognitive behavior therapy (CBT) that was effective, compared with an attention placebo condition, in reducing depression and anxiety symptoms (Christensen, Griffiths & Jorm, 2004). A subsequent study showed that outcomes for spontaneous users of the site are of the same magnitude as those of participants enrolled in the authors’ randomized controlled trial (Christensen et al., 2004).
Hence, the research questions for this study could be: 1) Can shorter internet interventions result in the same decrease in depression symptoms as the RCT? 2) Can brief bursts of information increase help-seeking and produce “one hit wonders”? 3) How can users be retained on the site for longer periods of time?

Methods of the Study

An online survey is the main design of the study, as the intervention is internet based. The sample consisted of 19 607 online visitors, or ‘free range users’, who registered on the site between April 2001 and September 2003.
The control group comprised the 182 participants in the MoodGYM condition of the BlueMood trial. To assess the symptoms of depression in the two groups, the Goldberg Depression and Anxiety Scales (Goldberg et al., 1988) were repeated within the website intervention to allow the examination of change in symptoms across modules. The variables examined were gender, initial depression severity scores, number of assessments attempted (maximum=5) and symptom levels following intervention. To determine whether the results were statistically significant, Christensen et al. (2006) used several statistical analyses. Linear regression analyses were used to develop predictors of final anxiety and depression scores. Chi-squared or t-tests were used to assess differences between the two versions of the site. For the comparison between the Mark I and Mark II versions, the researchers compared the 19 607 visitors to the original site with the 38 791 users who registered on the Mark II version between September 2003 and October 2004.

Analysis of the Results
Analysis of the predictors of final anxiety and depression scores for MoodGYM (Mark I) revealed no differences in outcomes between the randomized controlled trial (RCT) participants and those accessing the original MoodGYM website (p. 60). In the linear regression, where the dependent variable was the final score and the independent variables were gender, number of modules completed (treated as three dummy variables), initial depression score and a quadratic function of the initial score, all independent variables and the interaction between initial depression score and number of modules were significant.
The results showed that men are predicted to be 0.19 units (SE=0.095) higher than women on depression, controlling for the initial depression level and number of modules completed. For initial depression scores above 2, the final score is predicted to indicate improvement relative to the initial score, with the magnitude of the improvement increasing as a function of the number of modules attempted. The same holds for initial anxiety scores above 2. Mark II registrants were more likely than Mark I registrants to complete onsite assessments (p. 59). Thus, Christensen et al. (2006) answered the objectives of the study. For the first objective, they established that the predictors of expected final depression are gender, number of modules completed, and users’ initial depression scores.
The expected final anxiety predictors are the same as those for depression, except gender. The second aim was to compare user characteristics and outcomes from the original MoodGYM site (Mark I) with those of public registrants of the new public version of the site (MoodGYM Mark II). From the results, the researchers concluded that ‘free range users’ of the online version of MoodGYM Mark I are more likely to have lower depression scores at the end of the intervention if they are women, have lower initial scores, and complete more module assessments.
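The regression specification described above can be sketched in code. This is a minimal illustration only: the data below are simulated, the variable coding (e.g. gender as 1 = male) is an assumption, and the sketch does not reproduce the study’s actual coefficients such as the reported 0.19-unit gender difference.

```python
# Sketch of the study's regression form: final score ~ gender
# + three module dummies + initial score + initial^2
# + (initial score x modules). Simulated data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 500

gender = rng.integers(0, 2, n)        # assumed coding: 1 = male, 0 = female
modules = rng.integers(1, 5, n)       # module assessments attempted (1-4)
initial = rng.uniform(0, 9, n)        # initial Goldberg depression score

# Dummy-code modules 2-4 against a baseline of one module,
# matching the paper's "three dummy variables".
dummies = np.column_stack([(modules == k).astype(float) for k in (2, 3, 4)])

# Simulated outcome: improvement grows with modules completed for
# initial scores above 2; men score 0.19 units higher on average.
final = (initial - 0.3 * modules * (initial > 2) + 0.19 * gender
         + rng.normal(0, 0.5, n))

X = np.column_stack([
    np.ones(n),          # intercept
    gender,
    dummies,             # three module dummies
    initial,
    initial ** 2,        # quadratic term in the initial score
    initial * modules,   # interaction: initial score x modules
])
coef, *_ = np.linalg.lstsq(X, final, rcond=None)
print("estimated gender coefficient:", round(coef[1], 2))
```

With enough simulated users, the fitted gender coefficient recovers a value near the 0.19 built into the simulation, which is how such a model isolates one predictor while controlling for the others.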
These dose–response relationships may illustrate the importance of a user’s adherence to the site for positive outcomes, although they may also be due to the retention of users who make the biggest gains early. The completion data from MoodGYM indicate that adherence to the full program is poor, with less than 7% of the site users progressing beyond the first two modules in the Mark II site. The remaining proportion of users, the ‘one hit wonders’, drop out early.
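The site-version differences in completion proportions discussed here were tested with chi-squared tests, per the methods section. A minimal sketch of such a test on a 2x2 table follows; the completion counts are invented for illustration, since the paper reports only the registrant totals (19 607 and 38 791).

```python
# Pearson chi-squared test (1 df, no continuity correction) comparing
# assessment-completion proportions between two site versions.
def chi_squared_2x2(completed_a, total_a, completed_b, total_b):
    """Chi-squared statistic for a 2x2 table: completed vs not, by version."""
    table = [
        [completed_a, total_a - completed_a],
        [completed_b, total_b - completed_b],
    ]
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical completion counts for Mark I vs Mark II registrants:
chi2 = chi_squared_2x2(completed_a=3000, total_a=19607,
                       completed_b=9000, total_b=38791)
print(chi2)  # values above 3.84 are significant at alpha = 0.05 with 1 df
```

With samples this large, even small differences in proportions produce a statistic far above the 3.84 critical value, which is worth bearing in mind when the paper reports statistically significant version differences.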
The addition of compulsory components appears to increase compliance for a second assessment, but does not increase persistence, with approximately the same proportion of users from both sites discontinuing at subsequent assessment occasions. The Mark II structure is also associated with a reduced proportion of female users and a (statistically significant) increase in registrants with higher levels of depression (p. 62).

Limitations

Christensen et al. recognized that their study has several limitations.
First, the usefulness of making direct comparisons of the outcome levels and attrition rates of Internet interventions and clinical trials is questionable because of differences in patterns of attrition, and because the missing data will reflect the differing sample characteristics (motivation, symptom severity and expectations of participants). In addition, Internet sites create the opportunity to ‘opt in’ or ‘opt out’ of ‘treatment’ easily, making them likely to ‘enroll’ diverse individuals with low levels of commitment and little expectation of being ‘helped’. Clinical trials, by contrast, provide infrastructure and positive expectations. These differences call into question the usefulness of directly comparing rates of adherence or compliance across the two types of intervention. Moreover, selective attrition is difficult to interpret in both clinical and Internet trials, because ‘dropout’ or non-adherence may arise for different reasons and be associated with different outcomes for different individuals.
For example, Internet users may ‘drop out’ either because they are dissatisfied with the intervention (real ‘dropouts’) or because the intervention has met their needs (these individuals are labeled ‘attainers’ in e-education environments) (Martinez, 2003).

Future Research Studies

The authors recommended future studies to identify the proportions of these different classes of dropouts, as this will require analysis models different from those traditionally used, including new models that take into account individual trajectories of change based on sample characteristics.
Hence, new services should be created through the development of virtual clinics with a consumer focus, including the use of a ‘new class of worker’ to implement evidence-based applications. Future research should also test whether shorter interventions lead to similar health outcomes and whether even brief bursts of information lead to increased help-seeking. Lastly, future research is needed to resolve the issue of website adherence or ‘stickiness’ for online sites like MoodGYM (p. 62), and to reward models of service that deliver evidence-based treatments through Medicare rebates or other funding mechanisms.
Conclusion

This critique of the seminar “Delivering internet interventions for depression: Free range users and one hit wonders” by Helen Christensen, Kathy Griffiths, Chloe Groves and Ailsa Korten, based on the journal article “Free range users and one hit wonders: community users of an Internet-based cognitive behavior therapy program” (2006), has recognized that the Internet has the capacity to reach many individuals who may not seek formal treatment for mental health services, and that the Internet has a role in disease prevention, even in the delivery of short positive health messages.
Patient visitors to an internet-based cognitive therapy program such as the MoodGYM site are likely to have better psychological outcomes if they complete more of the site material, and making core sections compulsory increases assessment completion and thus treatment exposure.

References

Burns, N. & Grove, S. K. (1993). The practice of nursing research: conduct, critique and utilization. 4th edition. Philadelphia: W. B. Saunders.

Christensen H, Griffiths KM, Jorm AF (2004). Delivering interventions for depression by using the Internet: randomized controlled trial. British Medical Journal, 328:265.

Christensen H, Griffiths KM, Korten AE, Brittliffe K, Groves C. (2004). A comparison of changes in anxiety and depression symptoms of spontaneous users and trial participants of a cognitive behavior therapy website. Journal of Medical Internet Research, 6:e46.

Christensen, H., Griffiths, K., Groves, C. & Korten, A. (2006). Free range users and one hit wonders: community users of an Internet-based cognitive behavior therapy program. Australian and New Zealand Journal of Psychiatry, 40:59–62.

Goldberg D, Bridges K, Duncan-Jones P, Grayson D. (1988). Detecting anxiety and depression in general medical settings. British Medical Journal, 297:897–899.

Martinez M. (2003). High attrition rates in e-learning: challenges, predictors, and solutions. The E-Learning Developers’ Journal.

Ingram, Richard (2002). An introduction to critiquing research papers, with resources for further study. Available: http://www.richard.ingram.nhspeople.net/student/critintro.htm

LoBiondo-Wood, G. & Haber, J. (1998). Research: methods, critical appraisal, and utilization. (5th edition). St. Louis: Mosby.

LoBiondo-Wood, G., Haber, J. & Krainovich-Miller, B. (2002). Critical reading strategies: overview of the research process. Chapter 2 in LoBiondo-Wood, G. & Haber, J. (editors). Research: methods, critical appraisal, and utilization. (5th edition). St. Louis: Mosby.