Extraction form for project: The effectiveness of brief interventions for preventative health behaviours within the Third and Social Economy sector

Design Details

1. What is the report title?
2. Study funding source
3. Possible conflicts of interest
4. Is the study eligible based on study type?
Quantitative: randomised or non-randomised controlled trial, controlled before/after study, interrupted time series or repeated measures study with at least 3 timepoints before and after the intervention, pilot trial, or other
5. Is the study eligible based on participants?
Over 18 (or an overlapping age range, e.g. 15+)
6. Is the study eligible in terms of intervention?
Fulfils all of the following:
• Sessions under 30 minutes
• One-to-one
• Discusses at least one of the following: housing, finance, physical activity, diet, smoking or alcohol
• Motivational interviewing, goal setting, action and coping planning (within the timescale)
EITHER:
1) Delivered by someone within the TSE sector who is not from an educational setting, e.g. girl guides, charity volunteers, food bank operatives; OR
2) Applied within a TSE setting that is not educational, e.g. food bank, soup kitchen, local knitting group, coffee morning.
If delivered by a volunteer from a TSE setting: by any medium, e.g. telephone or video call, or in person.
If not delivered by a volunteer from a TSE setting: the first session must be in person; remaining sessions can be by any medium, e.g. telephone or video call.
7. Is the study eligible in terms of outcome?
Any of the following:
• Self-report, e.g. food or exercise diary, smoking cessation
• Objective measure of behaviour, e.g. accelerometer, exhaled carbon monoxide, observations, bank statement, employment status
• Biological outcome of behaviour, e.g. glycaemic control, blood pressure, cholesterol
8. Population description
e.g. low SES men, homeless adults. Include comparative information for each group (i.e. intervention and controls) if available.
9. Setting
Relating to TSE or not. Include comparative information for each group (i.e. intervention and controls) if available
10. Inclusion criteria
Include comparative information for each group (i.e. intervention and controls) if available.
11. Exclusion criteria
Include comparative information for each group (i.e. intervention and controls) if available.
12. Sampling method (e.g. purposive or opportunity)
Include comparative information for each group (i.e. intervention and controls) if available.
13. Method/s of recruitment of participants (e.g. posters, referral). Include comparative information for each group (i.e. intervention and controls) if available.
14. Aim of study
Descriptions as stated in report/paper
15. Duration of participation
From recruitment to last follow-up. Descriptions as stated in report/paper.

Arms

Arm Name and Description
Brief MI (motivational interviewing)
NRT sampling (nicotine replacement therapy)
referral-only (control group)

Arm Details

1. Number assigned to group
Specify whether no. people or clusters. Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
2. Content
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
3. Duration of sessions
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
4. Frequency and number of sessions
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
5. Providers
e.g. no., profession, training. Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
6. Co-interventions (if applicable)
Description as stated in report/paper
Brief MI
NRT sampling
referral-only

Sample Characteristics

1. Sample size at the beginning of the study
Description as stated in report/paper
Brief MI
NRT sampling
referral-only
Total
2. Clusters
If applicable, no., type, no. people per cluster. Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
Total
3. Baseline imbalances between groups (if applicable)
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
Total
4. Number of withdrawn / excluded participants
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
Total
5. Age (range and mean)
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
Total
6. Sex (percentages)
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
Total
7. Ethnicity (percentages)
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
Total
8. Other treatment received (additional to study intervention)
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
Total
9. Other relevant sociodemographics
Description as stated in report/paper.
Brief MI
NRT sampling
referral-only
Total

Outcomes

Type | Domain | Specific measurement (i.e., tool/definition/specific outcome) | Populations | Timepoints
Continuous | Reduction in cigarette consumption | cigarettes smoked per day | All Participants | Baseline, 1 month
Categorical | Quit attempt | made between baseline and follow-up | All Participants | 1 month

Outcome Details

1. Timepoints measured
Specify whether from start or end of intervention. Description as stated in report/paper.
Reduction in cigarette consumption
Quit attempt
2. Outcome definition
Note whether the outcome is desirable or undesirable if this is not obvious. Description as stated in report/paper
Reduction in cigarette consumption
Quit attempt
3. Person measuring / reporting
Description as stated in report/paper
Reduction in cigarette consumption
Quit attempt
4. Unit of measurement (if relevant)
Description as stated in report/paper.
Reduction in cigarette consumption
Quit attempt

Risk of Bias Assessment

1. Select the study type
2. 1.1 Was the allocation sequence random?
Rating
Notes/Comments:
3. 1.2 Was the allocation sequence concealed until participants were enrolled and assigned to interventions?
Rating
Notes/Comments:
4. 1.3 Did baseline differences between intervention groups suggest a problem with the randomization process?
Rating
Notes/Comments:
5. Risk-of-bias judgement
Rating
Notes/Comments:
6. Optional: What is the predicted direction of bias arising from the randomization process?
Rating
Notes/Comments:
7. 2.1. (effect of assignment to intervention) Were participants aware of their assigned intervention during the trial?
Rating
Notes/Comments:
8. 2.2. (effect of assignment to intervention) Were carers and people delivering the interventions aware of participants' assigned intervention during the trial?
Rating
Notes/Comments:
9. 2.3. (effect of assignment to intervention) If Y/PY/NI to 2.1 or 2.2: Were there deviations from the intended intervention that arose because of the experimental context?
Rating
Notes/Comments:
10. 2.4. (effect of assignment to intervention) If Y/PY to 2.3: Were these deviations from intended intervention balanced between groups?
Rating
Notes/Comments:
11. 2.5. (effect of assignment to intervention) If N/PN/NI to 2.4: Were these deviations likely to have affected the outcome?
Rating
Notes/Comments:
12. 2.6. (effect of assignment to intervention) Was an appropriate analysis used to estimate the effect of assignment to intervention?
Rating
Notes/Comments:
13. 2.7. (effect of assignment to intervention) If N/PN/NI to 2.6: Was there potential for a substantial impact (on the result) of the failure to analyse participants in the group to which they were randomized?
Rating
Notes/Comments:
14. Risk-of-bias judgement
Rating
Notes/Comments:
15. Optional: (effect of assignment to intervention) What is the predicted direction of bias due to deviations from intended interventions?
Rating
Notes/Comments:
16. 2.1. (effect of adhering to intervention) Were participants aware of their assigned intervention during the trial?
Rating
Notes/Comments:
17. 2.2. (effect of adhering to intervention) Were carers and people delivering the interventions aware of participants' assigned intervention during the trial?
Rating
Notes/Comments:
18. 2.3. (effect of adhering to intervention) If Y/PY/NI to 2.1 or 2.2: Were important co-interventions balanced across intervention groups?
Rating
Notes/Comments:
19. 2.4. (effect of adhering to intervention) Were there failures in implementing the intervention that could have affected the outcome?
Rating
Notes/Comments:
20. 2.5. (effect of adhering to intervention) Was there non-adherence to the assigned intervention regimen that could have affected participants’ outcomes?
Rating
Notes/Comments:
21. 2.6. (effect of adhering to intervention) If N/PN/NI to 2.3 or 2.5 or Y/PY/NI to 2.4: Was an appropriate analysis used to estimate the effect of adhering to the intervention?
Rating
Notes/Comments:
22. Risk-of-bias judgement
Rating
Notes/Comments:
23. 3.1. Were data for this outcome available for all, or nearly all, participants randomized?
Rating
Notes/Comments:
24. 3.2. If N/PN/NI to 3.1: Is there evidence that the result was not biased by missing outcome data?
Rating
Notes/Comments:
25. 3.3. If N/PN to 3.2: Could missingness in the outcome depend on its true value?
Rating
Notes/Comments:
26. 3.4. If Y/PY/NI to 3.3: Is it likely that missingness in the outcome depended on its true value?
Rating
Notes/Comments:
27. Risk-of-bias judgement
Rating
Notes/Comments:
28. Optional: What is the predicted direction of bias due to missing outcome data?
Rating
Notes/Comments:
29. 4.1. Was the method of measuring the outcome inappropriate?
Rating
Notes/Comments:
30. 4.2. Could measurement or ascertainment of the outcome have differed between intervention groups?
Rating
Notes/Comments:
31. 4.3. If N/PN/NI to 4.1 and 4.2: Were outcome assessors aware of the intervention received by study participants?
Rating
Notes/Comments:
32. 4.4. If Y/PY/NI to 4.3: Could assessment of the outcome have been influenced by knowledge of intervention received?
Rating
Notes/Comments:
33. 4.5. If Y/PY/NI to 4.4: Is it likely that assessment of the outcome was influenced by knowledge of intervention received?
Rating
Notes/Comments:
34. Risk-of-bias judgement
Rating
Notes/Comments:
35. Optional: What is the predicted direction of bias in measurement of the outcome?
Rating
Notes/Comments:
36. 5.1. Were the data that produced this result analysed in accordance with a pre-specified analysis plan that was finalized before unblinded outcome data were available for analysis?
Rating
Notes/Comments:
37. 5.2. Is the numerical result being assessed likely to have been selected, on the basis of the results, from multiple outcome measurements (e.g. scales, definitions, time points) within the outcome domain?
Rating
Notes/Comments:
38. 5.3. Is the numerical result being assessed likely to have been selected, on the basis of the results, from multiple analyses of the data
Rating
Notes/Comments:
39. Risk-of-bias judgement
Rating
Notes/Comments:
40. Optional: What is the predicted direction of bias due to selection of the reported result?
Rating
Notes/Comments:
41. Overall Risk-of-bias judgement
Rating
Notes/Comments:
42. Optional: What is the overall predicted direction of bias for this outcome?
Rating
Notes/Comments:
43. 1a.1 Was the allocation sequence random?
Yes, if a random component was used in the sequence generation process, such as computer-generated random numbers, referring to a random number table, minimization, coin tossing, shuffling cards or envelopes, throwing dice, or drawing of lots. Minimization may be implemented without a random element, and this is considered to be equivalent to being random.
No, if the sequence is non-random, such that it is either likely to introduce confounding, or is predictable or difficult to conceal, e.g. alternation, methods based on dates (of birth or admission) or patient record numbers, allocation decisions made by clinicians or participants, allocation based on the availability of the intervention, or any other systematic or haphazard method.
If the only information about randomization methods is a statement that the study is randomized, then this signalling question should generally be answered as “No information”. There may be situations in which a judgement is made to answer Probably No or Probably Yes. For example, if the study was large, conducted by an independent trials unit or carried out for regulatory purposes, then it may be reasonable to assume that the sequence was random. Alternatively, if other (contemporary) trials by the same investigator team have clearly used non-random sequences, it might be reasonable to assume that the current study was done using similar methods. Similarly, if participants and personnel are all unaware of intervention assignments throughout/during the trial (blinding or masking), this may be an indicator that the allocation process was also concealed, but this will not necessarily always be the case. If the allocation sequence was clearly concealed but there is no information about how the sequence was generated, it will often be reasonable to assume that the sequence was random (although this will not necessarily always be the case).
Rating
Notes/Comments:
44. 1a.2 Is it likely that the allocation sequence was subverted?
Processes of randomizing clusters vary. It is important first to consider carefully whether there are any ways in which the allocation could potentially have been subverted (deliberately tampered with so that clusters end up in a group they were not supposed to be randomized to if the randomization was conducted properly). This will usually include a consideration of whether any individuals were aware of any potential allocations prior to those allocations being made. However, although subversion may be possible, it is often the case that in cluster randomized trials those who could subvert the randomization have less motivation and/or knowledge to do so (see text for further explanation), so a judgement must be made as to whether this is likely.
Rating
Notes/Comments:
45. 1a.3 Were there baseline imbalances that suggest a problem with the randomization process?
Imbalances in numbers of clusters or stratification factors or other cluster characteristics are usually the best evidence of problems with the randomization process, but such problems are relatively unusual as explained in 1a.2. On the other hand, due to the small numbers of clusters randomized in most cluster randomized trials, chance imbalances in either cluster or participant characteristics are more common than in individually-randomized trials and can sometimes appear substantial. As for the tool for individually-randomized trials, chance imbalances should not be highlighted here, and neither should imbalances that are due to identification/recruitment bias (which are assessed in Domain 1b).
Answer No if no imbalances are apparent or if any observed imbalances are compatible with chance. Answer Yes only if there is clear evidence of imbalances that appear to be due to problems with randomization. In some circumstances, it may be reasonable to answer Yes/Probably yes (rather than No information) when there is a surprising lack of information on baseline characteristics when such information could reasonably be expected to be available/reported. If there is no information about cluster characteristics, record No information.
The answer to this question should not be used to influence answers to questions 1a.1 or 1a.2. For example, if the trial has large baseline imbalances, but authors report adequate randomization methods, questions 1a.1 and 1a.2 should still be answered on the basis of the reported adequate methods, and any concerns about the imbalance should be raised in the answer to question 1a.3 and reflected in the domain-level risk-of-bias judgement.
Rating
Notes/Comments:
46. Risk-of-bias judgement
See Figure 1. Suggested Algorithm for reaching risk of bias judgments for bias arising from the randomization process, link: https://www.riskofbias.info/welcome/rob-2-0-tool/archive-rob-2-0-cluster-randomized-trials-2016
Rating
Notes/Comments:
47. Optional: What is the predicted direction of bias arising from the randomization process?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
48. 1b.1 Were all the individual participants identified before randomization of clusters (and if the trial specifically recruited patients were they all recruited before randomization of clusters)?
Answer Yes if participants were identified and recruited prior to the clusters being randomized or if individual participants were not recruited at all but were identified prior to randomization. In these cases identification/recruitment bias is not possible. Answer No if either identification or recruitment of participants (or both) takes place after randomization. Also answer No if some participants are identified and/or recruited before and some after randomization as the potential for bias still exists in these trials.
Rating
Notes/Comments:
49. 1b.2 If N/PN/NI to 1b.1: Is it likely that selection of individual participants was affected by knowledge of the intervention?
Answer Yes if those recruiting individuals are aware of cluster allocation prior to recruitment and are likely to have consciously or subconsciously recruited differentially in the trial arms; if some of those being recruited are aware of cluster allocation prior to their own recruitment and this is likely to have differentially affected recruitment in the trial arms; or if those identifying potential participants (when recruitment is to take place subsequently) or those identifying actual participants (when there is no subsequent recruitment) are aware of cluster allocation and are likely to have consciously or subconsciously differentially included potential individual participants in different trial arms. Answer No if all of the following (as relevant depending on the trial) are unaware of cluster allocation at recruitment: (1) those identifying actual participants, (2) those identifying potential participants, (3) those recruiting and (4) potential participants themselves.
Rating
Notes/Comments:
50. 1b.3 Were there baseline imbalances that suggest differential identification or recruitment of individual participants between arms?
As for signalling question 1a.3, imbalances that are compatible with chance should not be highlighted here. Imbalances due to differential identification or recruitment of participants are more common in cluster randomized trials than imbalances due to problems with randomization. Such imbalances are usually in the numbers of participants recruited into each arm or, less commonly, in the characteristics of such individuals. If there is a noticeable imbalance and imbalance due to the randomization process and due to identification/recruitment of individuals are both possible a judgement will need to be made about which is the most likely cause of any imbalance or whether they are both likely.
Rating
Notes/Comments:
51. Risk-of-bias judgement
See Figure 2. Suggested Algorithm for reaching risk of bias judgments for bias arising from the timing of identification and recruitment of participants in a cluster-randomized trial, link: https://www.riskofbias.info/welcome/rob-2-0-tool/archive-rob-2-0-cluster-randomized-trials-2016
Rating
Notes/Comments:
52. Optional: What is the predicted direction of bias arising from the randomization process?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
53. 2.1a Were participants aware that they were in a trial?
In cluster randomized trials it is possible for participants to know they are receiving an intervention, or that they are in a study, but not that they are in a trial. Thus they may not know that other interventions are being evaluated or what these interventions are. This makes it impossible for them to cause deviations from the intended interventions beyond what would be expected in usual practice.
Rating
Notes/Comments:
54. 2.1b If Y/PY/NI to 2.1a: Were participants aware of their assigned intervention during the trial?
Cluster randomized trials frequently involve multifaceted interventions. Answer Yes if participants were aware of any part of the allocated intervention during the trial.
Rating
Notes/Comments:
55. 2.2 Were carers and trial personnel aware of participants' assigned intervention during the trial?
If those involved in caring for participants or making decisions about their health care are aware of the assigned intervention, then implementation of the intended intervention, or administration of additional co-interventions, may differ between the assigned intervention groups. Masking carers and trial personnel, which is most commonly achieved through use of a placebo, may prevent such differences.
Rating
Notes/Comments:
56. 2.3 If Y/PY/NI to 2.1 or 2.2: Were there deviations from the intended intervention beyond what would be expected in usual practice?
When interest focusses on the effect of assignment to intervention, it is important to distinguish between: (a) deviations that happen in usual practice following the intervention and so are part of the intended intervention (for example, cessation of an exercise programme for health related issues); and (b) deviations from intended intervention that arise due to expectations of a difference between intervention and comparator (for example because participants feel ‘unlucky’ to have been assigned to the comparator group and therefore seek the active intervention, or components of it, or other interventions). We use the term “usual practice” to refer to the usual course of events in a non-trial context. Because deviations that arise due to expectations of a difference between intervention and comparator are not part of usual practice, they may lead to biased effect estimates that do not reflect what would happen to participants assigned to the interventions in practice. Deviations from the intended intervention that arise due to expectations of a difference between intervention and comparator are rarely reported in cluster randomized trials and may, in fact, occur rarely. This is likely to be partly because it is very often the case in these trials that those who might have the opportunity to introduce deviations will not have any inclination to deliberately affect the results of the trial by doing so. In addition the more complex the intervention, the more difficult it might be to practically identify such deviations. The answer “No information” will therefore be appropriate in many cases, but “Probably yes” should be used if it seems likely that such deviations occurred.
Rating
Notes/Comments:
57. 2.4 If Y/PY to 2.3: Were these deviations from intended intervention unbalanced between groups and likely to have affected the outcome?
Deviations from intended interventions that do not reflect usual practice will be important if they affect the outcome, but not otherwise. Furthermore, bias will arise only if there is imbalance in the deviations across the two groups.
Rating
Notes/Comments:
58. 2.5a Were any clusters analysed in a group different from the one to which they were assigned?
This question addresses one of the fundamental aspects of an “intention-to-treat” approach to the trial analysis: that clusters are analysed in the groups to which they were assigned through randomization. If some groups did not receive or implement their assigned intervention, and such clusters were analysed according to intervention received, then the balance between intervention groups created by randomization is lost.
Rating
Notes/Comments:
59. 2.5b In some cluster randomized trials it may not be possible to ascertain the original cluster that individuals were in.
This could happen, for example, when clusters split or merge, or when participants are not recruited and outcomes are collected from routine data. In this case a judgement will need to be made about whether the answer to this question is PY or NI.
Rating
Notes/Comments:
60. 2.6. If Y/PY/NI to 2.5: Was there potential for a substantial impact (on the estimated effect of intervention) of analysing participants in the wrong group?
Risk of bias will be high in a randomized trial in which sufficiently many clusters or participants were analysed in the wrong intervention group that there could have been a substantial impact on the results. There is potential for a substantial impact if more than 5% of participants were analysed in the wrong group, but for rare events there could be an impact for a smaller proportion.
Rating
Notes/Comments:
61. Risk-of-bias judgement
See Figure 3. Suggested Algorithm for reaching risk for bias due to deviations from intended interventions (effect of assignment intervention), link: https://www.riskofbias.info/welcome/rob-2-0-tool/archive-rob-2-0-cluster-randomized-trials-2016
Rating
Notes/Comments:
62. Optional: What is the predicted direction of bias due to deviations from intended interventions?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
63. 3.1a Were outcome data available for all, or nearly all, clusters randomized?
The appropriate study population for an analysis of the intention to treat effect is all randomized patients. Note that imputed data should be regarded as missing data, and not considered as “outcome data” in the context of this question. ‘Nearly all’ (equivalently, a low or modest amount of missing data) should be interpreted as “enough to be confident of the findings”, and a suitable proportion depends on the context. For continuous outcomes, availability of data from 95% (or possibly 90%) of the participants would often be sufficient. For dichotomous outcomes, the proportion required is directly linked to the risk of the event. If the observed number of events is much greater than the number of participants with missing outcome data, the bias would necessarily be small. An illustrative arithmetic sketch follows this item.
Rating
Notes/Comments:
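Purely as an illustration (not part of the RoB 2 guidance, and with hypothetical numbers), a reviewer could tabulate the proportion of randomized participants with outcome data and, for a dichotomous outcome, compare the observed events against the number of participants with missing data:

def availability_check(n_randomized, n_analyzed, n_events=None):
    """Proportion of randomized participants with outcome data, plus an
    events-versus-missing comparison for dichotomous outcomes."""
    prop = n_analyzed / n_randomized
    n_missing = n_randomized - n_analyzed
    print(f"Outcome data available for {prop:.1%} of randomized participants")
    if prop >= 0.95:
        print("At or above the 95% level often read as 'nearly all'")
    elif prop >= 0.90:
        print("Between 90% and 95%: possibly sufficient, judge in context")
    else:
        print("Below 90%: consider signalling questions 3.2 and 3.3")
    if n_events is not None:
        print(f"Observed events: {n_events}; participants missing: {n_missing}")
        # 'much greater' is itself a judgement call; the factor of 5 is only a placeholder
        if n_events > 5 * n_missing:
            print("Events far exceed missing data, so any bias is likely to be small")

# Hypothetical example: 184 of 200 randomized participants analysed, 120 events observed
availability_check(n_randomized=200, n_analyzed=184, n_events=120)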
64. 3.1b Were outcome data available for all, or nearly all, participants within clusters?
The issues here are broadly as for question 3.1a. In cluster-randomized trials there may be particular complexities when clusters merge, split, or disappear.
Rating
Notes/Comments:
65. 3.2 If N/PN/NI to 3.1a or 3.1b: Are the proportions of missing outcome data and reasons for missing outcome data similar across intervention groups?
Similar (with regard to proportion and reasons for missing outcome data) includes some minor degree of discrepancy across intervention groups as expected by chance. Assessment of comparability of reasons for missingness requires the reasons to be reported.
Rating
Notes/Comments:
66. 3.3 If N/PN/NI to 3.1a or 3.1b: Is there evidence that results were robust to the presence of missing outcome data?
Evidence for robustness may come from how missing data were handled in the analysis and whether sensitivity analyses were performed by the trial investigators, or from additional analyses performed by the systematic reviewers.
Rating
Notes/Comments:
67. Risk-of-bias judgement
See Figure 4. Suggested algorithm for reaching risk of bias judgments for bias due to missing outcome data, link: https://www.riskofbias.info/welcome/rob-2-0-tool/archive-rob-2-0-cluster-randomized-trials-2016
Rating
Notes/Comments:
68. Optional: What is the predicted direction of bias due to missing outcome data?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
69. 4.1a Were outcome assessors aware that a trial was taking place?
This question largely applies to studies in which participants report their outcomes themselves, for example in a questionnaire. The participant is then the outcome assessor. In individually randomized trials self-assessment may be influenced by assignment if participants are aware of their assignment. In cluster randomized trials, if participants are not aware that they are in a trial then their self-assessment cannot be affected by assignment regardless of whether they are aware of the intervention they receive or not.
Rating
Notes/Comments:
70. 4.1b If Y/PY/NI to 4.1: Were outcome assessors aware of the intervention received by study participants?
Enter No if outcome assessors were blinded to intervention status. In studies where participants report their outcomes themselves (i.e., participant-reported outcome), the outcome assessor is the study participant. In cases where outcomes are collected using routine data, the outcome assessor is the individual responsible for extracting the data.
Rating
Notes/Comments:
71. 4.2 If Y/PY/NI to 4.1: Was the assessment of the outcome likely to be influenced by knowledge of intervention received?
Knowledge of the assigned intervention may impact on participant-reported outcomes (such as level of pain), observer-reported outcomes involving some judgement, and intervention provider decision outcomes, while not impacting on other outcomes such as observer reported outcomes not involving judgement such as all-cause mortality. In many circumstances the assessment of observer reported outcomes not involving judgement such as all-cause mortality might be considered to be unbiased, even if outcome assessors were aware of intervention assignments.
Rating
Notes/Comments:
72. Risk-of-bias judgement
See Figure 5. Suggested algorithm for reaching risk of bias judgments for bias in measurement of the outcome, link: https://www.riskofbias.info/welcome/rob-2-0-tool/archive-rob-2-0-cluster-randomized-trials-2016
Rating
Notes/Comments:
73. Optional: What is the predicted direction of bias due to measurement of the outcome?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
74. Signalling Question - Are the reported outcome data likely to have been selected, on the basis of the results, from...
Rating
Notes/Comments:
75. 5.1 ... multiple outcome measurements (e.g. scales, definitions, time points) within the outcome domain?
A particular outcome domain (i.e. a true state or endpoint of interest) may be measured in multiple ways. For example, the domain pain may be measured using multiple scales (e.g. a visual analogue scale and the McGill Pain Questionnaire), each at multiple time points (e.g. 3, 6 and 12 weeks post-treatment). If multiple measurements were made, but only one or a subset is reported on the basis of the results (e.g. statistical significance), there is a high risk of bias in the fully reported result.
A response of Yes/Probably yes is reasonable if there is clear evidence (usually through examination of a trial protocol or statistical analysis plan) that a domain was measured in multiple ways, but data for only one or a subset of measures is fully reported (without justification), and the fully reported result is likely to have been selected on the basis of the results. Selection on the basis of the results arises from a desire for findings to be newsworthy, sufficiently noteworthy to merit publication, or to confirm a prior hypothesis. For example, trialists who have a preconception or vested interest in showing that an experimental intervention is beneficial may be inclined to selectively report outcome measurements that are favourable to the experimental intervention.
A response of No/Probably no is reasonable if any of the following hold:
• there is clear evidence (usually through examination of a trial protocol or statistical analysis plan) that all reported results for the outcome domain correspond to all intended outcome measurements;
• there is only one possible way in which the outcome domain can be measured (hence there is no opportunity to select from multiple measures);
• outcome measurements are inconsistent across different reports on the same trial, but the trialists have provided the reason for the inconsistency and it is not related to the nature of the results.
A response of No information is reasonable if analysis intentions are not available, or the analysis intentions are not reported in sufficient detail to enable an assessment, and there is more than one way in which the outcome domain could have been measured.
Rating
Notes/Comments:
76. 5.2 ... multiple analyses of the data?
A particular outcome domain may be analysed in multiple ways. Examples include: unadjusted and adjusted models; final value vs change from baseline vs analysis of covariance; transformations of variables; conversion of a continuously scaled outcome to categorical data with different cut-points; different sets of covariates for adjustment; different strategies for dealing with missing data. Application of multiple methods generates multiple effect estimates for a specific outcome domain. If multiple estimates are generated but only one or a subset is reported on the basis of the results (e.g. statistical significance), there is a high risk of bias in the fully reported result.
A response of Yes/Probably yes is reasonable if there is clear evidence (usually through examination of a trial protocol or statistical analysis plan) that a domain was analysed in multiple ways, but data for only one or a subset of analyses is fully reported (without justification), and the fully reported result is likely to have been selected on the basis of the results. Selection on the basis of the results arises from a desire for findings to be newsworthy, sufficiently noteworthy to merit publication, or to confirm a prior hypothesis. For example, trialists who have a preconception or vested interest in showing that an experimental intervention is beneficial may be inclined to selectively report analyses that are favourable to the experimental intervention.
A response of No/Probably no is reasonable if any of the following hold:
• there is clear evidence (usually through examination of a trial protocol or statistical analysis plan) that all reported results for the outcome domain correspond to all intended analyses;
• there is only one possible way in which the outcome domain can be analysed (hence there is no opportunity to select from multiple analyses);
• analyses are inconsistent across different reports on the same trial, but the trialists have provided the reason for the inconsistency and it is not related to the nature of the results.
A response of No information is reasonable if analysis intentions are not available, or the analysis intentions are not reported in sufficient detail to enable an assessment, and there is more than one way in which the outcome domain could have been analysed.
Rating
Notes/Comments:
77. Risk-of-bias judgement
See Figure 6. Suggested algorithm for reaching risk of bias in selection of the reported result, link: https://www.riskofbias.info/welcome/rob-2-0-tool/archive-rob-2-0-cluster-randomized-trials-2016
Rating
Notes/Comments:
78. Overall Risk-of-bias judgement, Optional: What is the overall predicted direction of bias for this outcome?
See Table 1. Reaching an overall risk of bias judgment for a specific outcome, link: https://www.riskofbias.info/welcome/rob-2-0-tool/archive-rob-2-0-cluster-randomized-trials-2016. If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
79. 1.1 Is there potential for confounding of the effect of intervention in this study?
In rare situations, such as when studying harms that are very unlikely to be related to factors that influence treatment decisions, no confounding is expected and the study can be considered to be at low risk of bias due to confounding, equivalent to a fully randomized trial. There is no NI (No information) option for this signalling question. If N/PN to 1.1: the study can be considered to be at low risk of bias due to confounding and no further signalling questions need be considered. If Y/PY to 1.1: determine whether there is a need to assess time-varying confounding (see 1.2).
Rating
Notes/Comments:
80. 1.2 Was the analysis based on splitting participants’ follow up time according to intervention received?
If participants could switch between intervention groups then associations between intervention and outcome may be biased by time-varying confounding. This occurs when prognostic factors influence switches between intended interventions. If N/PN, answer questions relating to baseline confounding (1.4 to 1.6). If Y/PY, proceed to question 1.3.
Rating
Notes/Comments:
81. 1.3 Were intervention discontinuations or switches likely to be related to factors that are prognostic for the outcome?
If intervention switches are unrelated to the outcome, for example when the outcome is an unexpected harm, then time-varying confounding will not be present and only control for baseline confounding is required.
Rating
Notes/Comments:
82. 1.4. Did the authors use an appropriate analysis method that controlled for all the important confounding domains?
Appropriate methods to control for measured confounders include stratification, regression, matching, standardization, and inverse probability weighting. They may control for individual variables or for the estimated propensity score. Inverse probability weighting is based on a function of the propensity score. Each method depends on the assumption that there is no unmeasured or residual confounding. An illustrative sketch of one such method follows this item.
Rating
Notes/Comments:
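As an illustration of one of the adjustment methods named in 1.4 (inverse probability weighting based on an estimated propensity score), the minimal sketch below uses simulated data with hypothetical variable names (age, baseline_score, treated, outcome). It is not the analysis of any included study and, like the methods listed above, it assumes no unmeasured or residual confounding.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "baseline_score": rng.normal(0, 1, n),
})
# Simulated treatment assignment depends on the measured confounders
logit = -3 + 0.05 * df["age"] + 0.8 * df["baseline_score"]
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df["outcome"] = 0.5 * df["treated"] + 0.3 * df["baseline_score"] + rng.normal(0, 1, n)

# Step 1: propensity score = modelled probability of treatment given measured confounders
X = sm.add_constant(df[["age", "baseline_score"]])
ps = sm.Logit(df["treated"], X).fit(disp=0).predict(X)

# Step 2: inverse probability weights (treated: 1/ps, untreated: 1/(1 - ps))
weights = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))

# Step 3: weighted regression of outcome on treatment gives a confounder-adjusted estimate
wls = sm.WLS(df["outcome"], sm.add_constant(df["treated"]), weights=weights).fit()
print(f"IPW-adjusted treatment effect: {wls.params['treated']:.2f}")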
83. 1.5. If Y/PY to 1.4: Were confounding domains that were controlled for measured validly and reliably by the variables available in this study?
Appropriate control of confounding requires that the variables adjusted for are valid and reliable measures of the confounding domains. For some topics, a list of valid and reliable measures of confounding domains will be specified in the review protocol, but for others such a list may not be available. Study authors may cite references to support the use of a particular measure. If authors control for confounding variables with no indication of their validity or reliability, pay attention to the subjectivity of the measure. Subjective measures (e.g. based on self-report) may have lower validity and reliability than objective measures such as lab findings.
Rating
Notes/Comments:
84. 1.6. Did the authors control for any post-intervention variables that could have been affected by the intervention?
Controlling for post-intervention variables that are affected by intervention is not appropriate. Controlling for mediating variables estimates the direct effect of intervention and may introduce bias. Controlling for common effects of intervention and outcome introduces bias.
Rating
Notes/Comments:
85. 1.7. Did the authors use an appropriate analysis method that adjusted for all the important confounding domains and for time-varying confounding?
Adjustment for time-varying confounding is necessary to estimate the effect of starting and adhering to intervention, in both randomized trials and NRSI. Appropriate methods include those based on inverse probability weighting. Standard regression models that include time-updated confounders may be problematic if time-varying confounding is present.
Rating
Notes/Comments:
86. 1.8. If Y/PY to 1.7: Were confounding domains that were adjusted for measured validly and reliably by the variables available in this study?
See 1.5 above
Rating
Notes/Comments:
87. Optional: What is the predicted direction of bias due to confounding?
Can the true effect estimate be predicted to be greater or less than the estimated effect in the study because one or more of the important confounding domains was not controlled for? Answering this question will be based on expert knowledge and results in other studies and therefore can only be completed after all of the studies in the body of evidence have been reviewed. Consider the potential effect of each of the unmeasured domains and whether all important confounding domains not controlled for in the analysis would be likely to change the estimate in the same direction, or if one important confounding domain that was not controlled for in the analysis is likely to have a dominant impact.
Rating
Notes/Comments:
88. 2.1. Was selection of participants into the study (or into the analysis) based on participant characteristics observed after the start of intervention?
This domain is concerned only with selection into the study based on participant characteristics observed after the start of intervention. Selection based on characteristics observed before the start of intervention can be addressed by controlling for imbalances between experimental intervention and comparator groups in baseline characteristics that are prognostic for the outcome (baseline confounding). If N/PN to 2.1: go to 2.4
Rating
Notes/Comments:
89. 2.2. If Y/PY to 2.1: Were the post-intervention variables that influenced selection likely to be associated with intervention?
Selection bias occurs when selection is related to an effect of either intervention or a cause of intervention and an effect of either the outcome or a cause of the outcome. Therefore, the result is at risk of selection bias if selection into the study is related to both the intervention and the outcome.
Rating
Notes/Comments:
90. 2.3 If Y/PY to 2.2: Were the post-intervention variables that influenced selection likely to be influenced by the outcome or a cause of the outcome?
Rating
Notes/Comments:
91. 2.4. Do start of follow-up and start of intervention coincide for most participants?
If participants are not followed from the start of the intervention then a period of follow up has been excluded, and individuals who experienced the outcome soon after intervention will be missing from analyses. This problem may occur when prevalent, rather than new (incident), users of the intervention are included in analyses.
Rating
Notes/Comments:
92. 2.5. If Y/PY to 2.2 and 2.3, or N/PN to 2.4: Were adjustment techniques used that are likely to correct for the presence of selection biases?
It is in principle possible to correct for selection biases, for example by using inverse probability weights to create a pseudo-population in which the selection bias has been removed, or by modelling the distributions of the missing participants or follow up times and outcome events and including them using missing data methodology. However such methods are rarely used and the answer to this question will usually be ‘No’.
Rating
Notes/Comments:
93. Optional: What is the predicted direction of bias due to selection of participants into the study?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
94. 3.1 Were intervention groups clearly defined?
A pre-requisite for an appropriate comparison of interventions is that the interventions are well defined. Ambiguity in the definition may lead to bias in the classification of participants. For individual-level interventions, criteria for considering individuals to have received each intervention should be clear and explicit, covering issues such as type, setting, dose, frequency, intensity and/or timing of intervention. For population-level interventions (e.g. measures to control air pollution), the question relates to whether the population is clearly defined, and the answer is likely to be ‘Yes’
Rating
Notes/Comments:
95. 3.2 Was the information used to define intervention groups recorded at the start of the intervention?
In general, if information about interventions received is available from sources that could not have been affected by subsequent outcomes, then differential misclassification of intervention status is unlikely. Collection of the information at the time of the intervention makes it easier to avoid such misclassification. For population-level interventions (e.g. measures to control air pollution), the answer to this question is likely to be ‘Yes’.
Rating
Notes/Comments:
96. 3.3 Could classification of intervention status have been affected by knowledge of the outcome or risk of the outcome?
Collection of the information at the time of the intervention may not be sufficient to avoid bias. The way in which the data are collected for the purposes of the NRSI should also avoid misclassification.
Rating
Notes/Comments:
97. Optional: What is the predicted direction of bias due to measurement of outcomes or interventions?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
98. 4.1. Were there deviations from the intended intervention beyond what would be expected in usual practice?
Deviations that happen in usual practice following the intervention (for example, cessation of a drug intervention because of acute toxicity) are part of the intended intervention and therefore do not lead to bias in the effect of assignment to intervention. Deviations may arise due to expectations of a difference between intervention and comparator (for example because participants feel unlucky to have been assigned to the comparator group and therefore seek the active intervention, or components of it, or other interventions). Such deviations are not part of usual practice, so may lead to biased effect estimates. However these are not expected in observational studies of individuals in routine care.
Rating
Notes/Comments:
99. 4.2. If Y/PY to 4.1: Were these deviations from intended intervention unbalanced between groups and likely to have affected the outcome?
Deviations from intended interventions that do not reflect usual practice will be important if they affect the outcome, but not otherwise. Furthermore, bias will arise only if there is imbalance in the deviations across the two groups.
Rating
Notes/Comments:
100. 4.3. Were important co-interventions balanced across intervention groups?
Risk of bias will be higher if unplanned co-interventions were implemented in a way that would bias the estimated effect of intervention. Co-interventions will be important if they affect the outcome, but not otherwise. Bias will arise only if there is imbalance in such co-interventions between the intervention groups. Consider the co-interventions, including any pre-specified co-interventions, that are likely to affect the outcome and to have been administered in this study. Consider whether these co-interventions are balanced between intervention groups.
Rating
Notes/Comments:
101. 4.4. Was the intervention implemented successfully for most participants?
Risk of bias will be higher if the intervention was not implemented as intended by, for example, the health care professionals delivering care during the trial. Consider whether implementation of the intervention was successful for most participants.
Rating
Notes/Comments:
102. 4.5. Did study participants adhere to the assigned intervention regimen?
Risk of bias will be higher if participants did not adhere to the intervention as intended. Lack of adherence includes imperfect compliance, cessation of intervention, crossovers to the comparator intervention and switches to another active intervention. Consider available information on the proportion of study participants who continued with their assigned intervention throughout follow up, and answer ‘No’ or ‘Probably No’ if this proportion is high enough to raise concerns. Answer ‘Yes’ for studies of interventions that are administered once, so that imperfect adherence is not possible.
Rating
Notes/Comments:
103. 4.6. If N/PN to 4.3, 4.4 or 4.5: Was an appropriate analysis used to estimate the effect of starting and adhering to the intervention?
It is possible to conduct an analysis that corrects for some types of deviation from the intended intervention. Examples of appropriate analysis strategies include inverse probability weighting or instrumental variable estimation. It is possible that a paper reports such an analysis without reporting information on the deviations from intended intervention, but it would be hard to judge such an analysis to be appropriate in the absence of such information. Specialist advice may be needed to assess studies that used these approaches. If everyone in one group received a co-intervention, adjustments cannot be made to overcome this.
Rating
Notes/Comments:
104. Optional: What is the predicted direction of bias due to deviations from the intended interventions?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
105. 5.1 Were outcome data available for all, or nearly all, participants?
‘Nearly all’ should be interpreted as ‘enough to be confident of the findings’, and a suitable proportion depends on the context. In some situations, availability of data from 95% (or possibly 90%) of the participants may be sufficient, providing that events of interest are reasonably common in both intervention groups. One aspect of this is that review authors would ideally try and locate an analysis plan for the study.
Rating
Notes/Comments:
106. 5.2 Were participants excluded due to missing data on intervention status?
Missing intervention status may be a problem. This requires that the intended study sample is clear, which it may not be in practice.
Rating
Notes/Comments:
107. 5.3 Were participants excluded due to missing data on other variables needed for the analysis?
This question relates particularly to participants excluded from the analysis because of missing information on confounders that were controlled for in the analysis.
Rating
Notes/Comments:
108. 5.4 If PN/N to 5.1, or Y/PY to 5.2 or 5.3: Are the proportions of participants and reasons for missing data similar across interventions?
This aims to elicit whether either (i) differential proportion of missing observations or (ii) differences in reasons for missing observations could substantially impact on our ability to answer the question being addressed. ‘Similar’ includes some minor degree of discrepancy across intervention groups as expected by chance.
Rating
Notes/Comments:
109. 5.5 If PN/N to 5.1, or Y/PY to 5.2 or 5.3: Is there evidence that results were robust to the presence of missing data?
Evidence for robustness may come from how missing data were handled in the analysis and whether sensitivity analyses were performed by the investigators, or occasionally from additional analyses performed by the systematic reviewers. It is important to assess whether assumptions employed in analyses are clear and plausible. Both content knowledge and statistical expertise will often be required for this. For instance, use of a statistical method such as multiple imputation does not guarantee an appropriate answer. Review authors should seek naïve (complete-case) analyses for comparison, and clear differences between complete-case and multiple imputation-based findings should lead to careful assessment of the validity of the methods used. A simple bounding sketch follows this item.
Rating
Notes/Comments:
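As a complement to the sensitivity analyses described in 5.5, the sketch below (hypothetical counts only, not taken from any included study) shows one simple robustness probe for a dichotomous outcome: the complete-case risk difference is compared with the estimates obtained under extreme assumptions about the missing outcomes. If the conclusion survives both extremes, the result is unlikely to be materially biased by the missing data.

def risk_difference(events_a, total_a, events_b, total_b):
    return events_a / total_a - events_b / total_b

# Hypothetical arm-level counts: observed events, participants analysed, participants missing
intervention = {"events": 30, "analysed": 90, "missing": 10}
control = {"events": 20, "analysed": 95, "missing": 5}

complete_case = risk_difference(intervention["events"], intervention["analysed"],
                                control["events"], control["analysed"])

# Extreme scenario A: every missing intervention participant had the event, no missing control participant did
upper = risk_difference(intervention["events"] + intervention["missing"],
                        intervention["analysed"] + intervention["missing"],
                        control["events"],
                        control["analysed"] + control["missing"])

# Extreme scenario B: the reverse assumption
lower = risk_difference(intervention["events"],
                        intervention["analysed"] + intervention["missing"],
                        control["events"] + control["missing"],
                        control["analysed"] + control["missing"])

print(f"Complete-case risk difference: {complete_case:.3f}")
print(f"Range under extreme assumptions about missing outcomes: {lower:.3f} to {upper:.3f}")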
110. Optional: What is the predicted direction of bias due to missing data?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
111. 6.1 Could the outcome measure have been influenced by knowledge of the intervention received?
Some outcome measures involve negligible assessor judgment, e.g. all-cause mortality or non-repeatable automated laboratory assessments. Risk of bias due to measurement of these outcomes would be expected to be low.
Rating
Notes/Comments:
112. 6.2 Were outcome assessors aware of the intervention received by study participants?
If outcome assessors were blinded to intervention status, the answer to this question would be ‘No’. In other situations, outcome assessors may be unaware of the interventions being received by participants despite there being no active blinding by the study investigators; the answer to this question would then also be ‘No’. In studies where participants report their outcomes themselves, for example in a questionnaire, the outcome assessor is the study participant. In an observational study, the answer to this question will usually be ‘Yes’ when the participants report their outcomes themselves.
Rating
Notes/Comments:
113. 6.3 Were the methods of outcome assessment comparable across intervention groups?
Comparable assessment methods (i.e. data collection) would involve the same outcome detection methods and thresholds, same time point, same definition, and same measurements.
Rating
Notes/Comments:
114. 6.4 Were any systematic errors in measurement of the outcome related to intervention received?
This question refers to differential misclassification of outcomes. Systematic errors in measuring the outcome, if present, could cause bias if they are related to intervention or to a confounder of the intervention-outcome relationship. This will usually be due either to outcome assessors being aware of the intervention received or to non-comparability of outcome assessment methods, but there are examples of differential misclassification arising despite these controls being in place.
Rating
Notes/Comments:
115. Optional: What is the predicted direction of bias due to measurement of outcomes?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
116. 7.1. ... multiple outcome measurements within the outcome domain?
For a specified outcome domain, it is possible to generate multiple effect estimates for different measurements. If multiple measurements were made, but only one or a subset is reported, there is a risk of selective reporting on the basis of results.
Rating
Notes/Comments:
117. 7.2 ... multiple analyses of the intervention-outcome relationship?
Because of the limitations of using data from non-randomized studies for analyses of effectiveness (need to control confounding, substantial missing data, etc), analysts may implement different analytic methods to address these limitations. Examples include unadjusted and adjusted models; use of final value vs change from baseline vs analysis of covariance; different transformations of variables; a continuously scaled outcome converted to categorical data with different cut-points; different sets of covariates used for adjustment; and different analytic strategies for dealing with missing data. Application of such methods generates multiple estimates of the effect of the intervention versus the comparator on the outcome. If the analyst does not pre-specify the methods to be applied, and multiple estimates are generated but only one or a subset is reported, there is a risk of selective reporting on the basis of results.
Rating
Notes/Comments:
118. 7.3 ... different subgroups?
Particularly with large cohorts often available from routine data sources, it is possible to generate multiple effect estimates for different subgroups or simply to omit varying proportions of the original cohort. If multiple estimates are generated but only one or a subset is reported, there is a risk of selective reporting on the basis of results.
Rating
Notes/Comments:
119. Optional: What is the predicted direction of bias due to selection of the reported result?
If the likely direction of bias can be predicted, it is helpful to state this. The direction might be characterized either as being towards (or away from) the null, or as being in favour of one of the interventions.
Rating
Notes/Comments:
120. Optional: What is the overall predicted direction of bias for this outcome?
Rating
Notes/Comments:

Results

Categorical


Quit attempt (made between baseline and follow-up)

All Participants
Arms: Brief MI | NRT sampling | referral-only
1 month
  Descriptive statistics (per arm): Total (N analyzed), Events, Percentage
  Between-arm comparisons: Odds Ratio (OR), 95% CI low (OR), 95% CI high (OR), p value
Within Arm Comparisons / Net Comparisons (per arm): Brief MI | NRT sampling | referral-only
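The between-arm fields above (Odds Ratio, 95% CI limits, p value) are conventionally derived from the per-arm events and totals. The sketch below shows the standard log odds ratio calculation with a Woolf-type confidence interval, using hypothetical counts; it is not a statement of how any included study analysed its data.

import math

def odds_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    # 2x2 table cells; assumes no zero cells (a continuity correction would be needed otherwise)
    a, b = events_a, total_a - events_a      # arm A: events, non-events
    c, d = events_b, total_b - events_b      # arm B: events, non-events
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of log(OR)
    ci_low = math.exp(math.log(odds_ratio) - z * se_log_or)
    ci_high = math.exp(math.log(odds_ratio) + z * se_log_or)
    # Two-sided p value from the Wald statistic on the log scale
    wald_z = abs(math.log(odds_ratio)) / se_log_or
    p_value = 2 * (1 - 0.5 * (1 + math.erf(wald_z / math.sqrt(2))))
    return odds_ratio, ci_low, ci_high, p_value

# Hypothetical: 18/60 quit attempts (Brief MI) vs 9/58 (referral-only)
print(odds_ratio_ci(18, 60, 9, 58))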

Continuous


Reduction in cigarette consumption (cigarettes smoked per day)

All Participants
Arms: Brief MI | NRT sampling | referral-only
Baseline
  Descriptive statistics (per arm): Total (N analyzed), Mean, SD
  Between-arm comparisons: p value, Mean Difference (MD), 95% CI low (MD), 95% CI high (MD), SD (MD), p value (MD)
1 month
  Descriptive statistics (per arm): Total (N analyzed), Mean, SD
  Between-arm comparisons: p value, Mean Difference (MD), 95% CI low (MD), 95% CI high (MD), SD (MD), p value (MD)
Within Arm Comparisons / Net Comparisons (per arm): Brief MI | NRT sampling | referral-only
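Similarly, the Mean Difference and its 95% CI above can be derived from the per-arm N, mean and SD. The sketch below uses an unpooled (Welch-type) standard error with a normal approximation and hypothetical numbers; the included study may have used a different analysis.

import math

def mean_difference_ci(n1, mean1, sd1, n2, mean2, sd2, z=1.96):
    md = mean1 - mean2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # unpooled standard error of the difference
    return md, md - z * se, md + z * se

# Hypothetical 1-month data, cigarettes per day: Brief MI vs referral-only
print(mean_difference_ci(n1=60, mean1=10.2, sd1=5.1, n2=58, mean2=13.4, sd2=5.6))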