Reliability and Validity of the Brisbane Evidence-Based Language Test

The Brisbane Evidence-Based Language Test (Brisbane EBLT) has undergone psychometric analysis in two statistically powered multi-site studies examining the validity (including diagnostic test cut-off scores) and reliability of the measure. Both studies adhere to EQUATOR network guidelines (Enhancing the Quality and Transparency Of health Research), ensuring rigorous reporting of sample selection, study design and statistical analysis in health and medical research.

Brisbane Evidence-Based Language Test citation:

Alexia Rohde, Suhail A. Doi, Linda Worrall, Erin Godecke, Anna Farrell, Robyn O’Halloran, Molly McCracken, Nadine Lawson, Rebecca Cremer & Andrew Wong (2020) Development and diagnostic validation of the Brisbane Evidence-Based Language Test, Disability and Rehabilitation, DOI: 10.1080/09638288.2020.1773547

Brisbane Evidence-Based Language Test Psychometrics

The psychometric publications on reliability and validity are both open access and are available in full (without subscription) from the links below:

Brisbane EBLT Test Development and Validity (diagnostic accuracy)

Development and diagnostic validation of the Brisbane Evidence-Based Language Test

This STARD-compliant publication outlines the test development and reports the cut-off scores indicative of language impairment. A user-friendly guide to interpreting the cut-off scores is provided on the Tests page (download the ‘Administrative & Scoring Guidelines’ and go to pages 3-4). Please note that the cut-off scores are not printed on the test forms themselves.

Citation: Alexia Rohde, Suhail A. Doi, Linda Worrall, Erin Godecke, Anna Farrell, Robyn O’Halloran, Molly McCracken, Nadine Lawson, Rebecca Cremer & Andrew Wong (2020) Development and diagnostic validation of the Brisbane Evidence-Based Language Test, Disability and Rehabilitation, DOI: 10.1080/09638288.2020.1773547

Brisbane EBLT Reliability Analysis

Inter-rater reliability, intra-rater reliability and internal consistency of the Brisbane Evidence-Based Language Test 

This GRRAS-compliant publication outlines the reliability of this new measure and reports the consistency of test scores when obtained from different clinicians and the same clinician at different times.

Citation: Alexia Rohde, Molly McCracken, Linda Worrall, Anna Farrell, Robyn O’Halloran, Erin Godecke, Michael David & Suhail A. Doi (2020) Inter-rater reliability, intra-rater reliability and internal consistency of the Brisbane Evidence-Based Language Test, Disability and Rehabilitation, DOI: 10.1080/09638288.2020.1776774

Underpinning Systematic Review

Diagnosis of aphasia in stroke populations: A systematic review of language tests.

This open-access, PRISMA-compliant systematic review underpins the development of the Brisbane EBLT. It examines the diagnostic capabilities of existing speech pathology language tests and establishes the rationale for creating the new aphasia test.

Citation: Alexia Rohde, Linda Worrall, Erin Godecke, Robyn O’Halloran, Anna Farrell & Margaret Massey (2018) Diagnosis of aphasia in stroke populations: A systematic review of language tests. PLoS ONE 13(3): e0194143. https://doi.org/10.1371/journal.pone.0194143

Evidence-Based Foundations of the Brisbane EBLT

The Brisbane EBLT has been developed based on the four components of Evidence-Based Practice (Straus et al., 2011) incorporating: clinically relevant research evidence, clinical experience, clinical context and patient perspectives. 

Clinically Relevant Research Evidence
The Brisbane EBLT has undergone psychometric analysis and has been shown to be a reliable and valid assessment.

Clinical Experience
The tests were developed from the existing skills and professional knowledge base of speech pathologists in both clinical and research fields.

Clinical Context
All versions of the Brisbane EBLT have undergone extensive piloting to ensure their feasibility for use in the acute hospital environment, with a focus on being user-friendly and quick and easy to administer, score and interpret.

Patient Perspective
All Brisbane EBLT test items have been reviewed with patient and family feedback and have been designed to be sensitive to patient and family needs in the acute post-stroke period.

Clinically Relevant Research Evidence

All versions of the Brisbane EBLT have undergone psychometric analysis in three separate research studies. The tests have undergone analysis of:

Test Validity (diagnostic accuracy) – The ability of the Brisbane EBLT to identify acute post-stroke language deficits was assessed in a STARD-compliant cross-sectional study at the Royal Brisbane and Women’s Hospital and the Princess Alexandra Hospital in Brisbane, Australia.

Test validity was determined by comparing acute stroke patients’ performance on the Brisbane EBLT with their performance on a ‘gold-standard’ language battery, yielding the sensitivity and specificity of the new test. Overall, this study generated 100 Brisbane EBLT test ratings.
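The sensitivity and specificity derived from such a comparison are simple ratios over a two-by-two table of index-test results against the reference (‘gold-standard’) battery. A minimal sketch, using hypothetical counts rather than the study’s actual data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Diagnostic accuracy from a 2x2 table of index test vs reference standard."""
    sensitivity = tp / (tp + fn)  # proportion of impaired patients the test detects
    specificity = tn / (tn + fp)  # proportion of unimpaired patients correctly cleared
    return sensitivity, specificity

# Hypothetical counts for illustration only (not the study's data):
# 40 true positives, 10 false negatives, 45 true negatives, 5 false positives
sens, spec = sensitivity_specificity(tp=40, fn=10, tn=45, fp=5)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.80, specificity=0.90
```

The published validation study reports the actual cut-off scores and accuracy values; the function above only illustrates how such figures are computed.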

Inter-Rater Reliability – The inter-rater reliability of the Brisbane EBLT was examined by comparing the Brisbane EBLT test scores of different speech pathologists when they scored the performance of the same stroke patients.

For this study, 15 speech pathologists of differing experience levels scored the performance of the same 15 acute stroke patients. Comparison of these test scores determined the inter-rater reliability of Brisbane EBLT test items. Overall, this study generated a total of 225 Brisbane EBLT test ratings.

Intra-Rater Reliability – The consistency of clinicians’ Brisbane EBLT scores over time was examined by comparing the scores of two speech pathologists when they re-scored the same patient performances after a two-week interval. Comparison of the test scores over time determined the intra-rater reliability of the Brisbane EBLT (a total of 140 test ratings).
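The reliability publication reports its own agreement statistics; as a generic illustration of chance-corrected agreement between two raters (or one rater at two time points), here is Cohen’s kappa computed on hypothetical pass/fail item scores, not the study’s data:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two sets of ratings over the same items."""
    n = len(ratings_a)
    # Observed proportion of items on which the two ratings agree
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rating set's category frequencies
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical pass(1)/fail(0) scores on ten items (illustration only):
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(a, b), 2))  # → 0.52
```

With many raters, as in the inter-rater study above, an intraclass correlation or a multi-rater kappa would be the usual extension of this idea.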

Patient Perspective

As part of the test development, acute stroke patients and their family members were asked to provide feedback on the test items. In total, 74 patients and family members provided feedback, which was incorporated into the test development process.

To control for cultural bias within test items, the test was piloted with participants originating from the following English-speaking countries: Australia, New Zealand, England, Scotland, United States of America, Canada and South Africa. Any items with reported cultural bias were eliminated from the assessment.

Clinical Context

The Brisbane EBLT validation was conducted with acute stroke patients within the first two months of stroke recovery. To determine the feasibility of the new test for use in the acute hospital context, it underwent significant piloting within this clinical environment.

The Brisbane EBLT was administered and trialled with acute stroke patients (n = 10). During this phase the test was significantly refined and shortened: any test items that were ambiguous, difficult to score or not applicable to the acute context (e.g. too lengthy) were excluded from the assessment.

Following test refinement, psychometric data collection (n = 100 patients) was also completed at the patient’s bedside in the acute hospital setting. In order to replicate acute post-stroke populations, patients both with and without language deficits were included within the patient sample.

To examine the translational capabilities of the Brisbane EBLT, following psychometric data collection all five versions of the test were piloted within two acute hospital speech pathology departments: the Royal Brisbane and Women’s Hospital and the Princess Alexandra Hospital in Brisbane.

Clinical Experience

Speech pathologists’ clinical knowledge and expertise have been instrumental in guiding the development of the Brisbane EBLT. The test framework was guided by the structure of existing informal language measures (n = 44) collated from speech pathologists working in acute post-stroke clinical practice across Australia and internationally.

During the development of the Brisbane EBLT, test items underwent significant clinical and research peer review and feedback. In total, test items were reviewed by over 108 speech pathologists from across Australia, including clinicians with expertise in both research and clinical fields.

In addition, during psychometric data collection all clinicians involved in the research studies (n = 16) provided additional feedback on test items from the Complete Brisbane EBLT.

Following psychometric data collection, speech pathologists (n = 11 clinicians) from the Royal Brisbane and Women’s Hospital provided feedback on the split versions of the Brisbane EBLT.

Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009) Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 6(7): e1000097. https://doi.org/10.1371/journal.pmed.1000097

Straus, S., Glasziou, P., Richardson, S. & Haynes, R. B. (2011). Evidence-based medicine: How to practice and teach it (4th ed.). Churchill Livingstone.