PROM Detail

Matched Pair instrument (PATIENT VERSION)
  • Basic Information
  • Detailed Information
  • Domains
  • Psychometrics

Basic Information

Abbreviated name
MPI
Full name
Matched Pair instrument (PATIENT VERSION)
Items ?
The number of questions in the survey
19
Short description
This measure uses 19 items to capture information from the patient about how their visit with a doctor unfolded. It focuses both on the process aspects of the visit (e.g. patient greeting, listening, and understanding) and on the content of the visit (e.g. explanations, treatment options, next steps). There are patient and doctor versions, and because the number of items is relatively low, it can be completed at the end of the visit without placing too much burden on the patient.
PCCC or QoL? ?
This compendium contains patient-reported measures that are either designed to specifically measure aspects of Person Centred Co-Ordinated Care (P3C), or alternatively tools that are designed to measure some aspect of Quality of Life (QoL) or Health Related Quality of Life (hrQoL). All the measures in this compendium have been broadly categorised into one of those two concepts.
Person Centred Coordinated Care
Main Domains Measured ?
These are the key domains that the measure targets.
Communication skills
Type of measure ?
The measures in this compendium can take a variety of forms. Generally, they will be either Patient Reported Outcome Measure (PROM) or Patient Reported Experience Measure (PREM). However, we have also included a few measures that are completed by proxy-individual (PROXY), which are useful in instances where the respondent cannot answer directly (e.g. dementia or end of life). Sometimes, these measures can even be a composite of these types, and target both experiences and outcomes – we have labelled these measures “PROEMs”.
PREM
Respondent ?
The person who fills in the questionnaire, e.g. the patient, a health care professional, or a proxy (normally a carer or family member).
Patients

Detailed Information

Year developed ?
The year in which the measure was first published.
2007
Country developed in ?
The main country (or countries) in which the measure was first developed.
Canada
Original publication ?
The publication in which the measure was originally published.
Target condition ?
The measures can be either generic or disease-specific (e.g. diabetes, heart failure).
Generic
Main context tested in ?
The main context in which the measure has been developed and used (e.g. hospital, general practice, etc.).
Primary care
Main countries used in ?
The main countries in which the measure has been developed and used.
Canada
Target age ?
e.g. Adults, Children, Elderly
Adults
Main uses of measure ?
The context in which the measure is most often used – e.g. clinical trials; national surveys.
Assessment of communication skills between patient and physician.
Used in UK? ?
Whether the instrument has been tested and validated within a UK healthcare context.
No
Impact ?
A crude indication of the measure's academic impact: the number of times the original publication has been cited on PubMed, divided by (i.e. normalised to) the number of years since publication (see the worked example below).
1
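For illustration only (hypothetical figures, not the actual citation count): a measure published in 2007 that had accumulated 10 PubMed citations by 2017 would score 10 citations ÷ 10 years = 1.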
Language
English

Domains

Domain description
Communication skills
[Bar chart, 0–100% scale: Empowerment/activation (6), Generic care planning (2), Shared decision making (3), Behaviour and communication skills (4), Information sharing (12), Medication (1)]

Psychometrics

Brief description ?
A brief description of the initially reported psychometric properties of the measure.
The original development paper reported that a principal components analysis indicated two factors, process and content, accounting for 52% and 7% of the doctor variance and 60% and 6% of the patient variance, respectively. Linear regression showed that only gender accounted for any of the variance in ratings. Cronbach's alphas for both the doctor and patient questionnaires were ≥ 0.96. The G analysis provided G = 0.98 and 0.40 (standard errors of 0.003 and 0.02) for doctors and patients, respectively. A more recent study (Stoddard, 2014) reported that the MPI demonstrated construct validity when used with practising physicians; however, its validity was found to be sensitive to both the medical context (e.g. inpatient setting) and the social context (e.g. adults, English-speaking patients) in which it is used. Acceptable reliability was only established when a large number of responses were available.
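As an illustration of the kind of analysis described above (not the authors' own code), the sketch below runs a two-component principal components analysis on a hypothetical matrix of 19-item patient responses and reports the variance explained by each component. The data, sample size, and rating scale are invented for the example.

```python
# Illustrative sketch only: a two-component PCA of the kind reported above
# (process and content components, with % variance explained), run on
# hypothetical 19-item patient ratings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical data: 200 patients x 19 items, each rated on a 5-point scale.
responses = rng.integers(1, 6, size=(200, 19)).astype(float)

pca = PCA(n_components=2)
pca.fit(responses)

for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"Component {i}: {ratio:.1%} of variance explained")
```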
COSMIN/Terwee/EMPRO/OXFORD ?
This is the method used to systematically review the psychometric properties of the instrument - either the COSMIN criteria, Terwee's criteria, EMPRO, or the custom methodology of the Oxford PROMS group.
Setting ?
This is the setting in which the psychometrics were evaluated. This field also contains a link to the original study in which the systematic ratings of the psychometric data (shown in the diagram below) were performed. The data given below are for indicative purposes only, and users should refer to the original source when seriously reviewing measures.
Internal consistency ?
Internal consistency evaluates how the individual items of the outcome measure correlate with each other. The usual quality criterion for assessing internal consistency is Cronbach’s alpha, which reports the average of the correlations between all possible split halves of the scale. A high internal consistency (>0.8) suggests that many items of the measure are capturing similar aspects. Internal consistency is important if an outcome measure is used to monitor a single underlying concept with multiple items. However, if the underlying clinical phenomenon is complex, internal consistency is not so relevant, or may be reported for sub-scales of the instrument.
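A minimal Python sketch of how Cronbach's alpha is commonly computed from an item-response matrix; the response data are hypothetical and the function is illustrative, not taken from any particular scoring manual.

```python
# Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents and columns = questionnaire items."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from 100 respondents to a 19-item scale:
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(100, 19)).astype(float)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # values > 0.8 suggest high internal consistency
```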
Reliability ?
The reliability of an outcome measure refers to whether the measure produces the same or similar results when administered in unchanged conditions. Reliability is important as it can reduce measurement error, i.e. errors that are related to the process of measurement. Providing clear definitions for the scores from an outcome measure helps to make it more reliable, and fewer points on the scale also improve reliability. It can be assessed via inter-rater reliability (whether similar results are reached when different observers rate the same situation or patient) or test-retest reliability (whether similar results are reached over two distinct periods of time in unchanged conditions).
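A minimal sketch of a test-retest check using hypothetical total scores from two administrations; a simple Pearson correlation is shown for brevity, although an intraclass correlation coefficient is often preferred in practice.

```python
import numpy as np

# Hypothetical total scores for the same eight respondents at two time points.
time1 = np.array([62, 55, 70, 48, 66, 59, 73, 51])
time2 = np.array([60, 57, 69, 50, 64, 61, 71, 49])

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest correlation r = {r:.2f}")
```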
Measurement error ?
The difference between a measured value of a quantity and its true value.
Content validity ?
Validity is one of the most important aspects of an outcome measure. It refers to what a tool is measuring and whether it is measuring what it should be measuring.
Construct structural validity ?
Construct validity is the extent to which a measurement corresponds to the theoretical concepts or constructs that it was designed to measure. It can be assessed via statistical evaluations of the structure of the measurement, such as factor analysis. Correlations that fit the expected pattern contribute evidence of construct validity.
Construct hypothesis testing ?
If no other measure or gold standard exists for comparison, the measure could be linked to a theory or hypothesis in order to show construct validity.
Construct cross cultural validity ?
To be able to use outcome measures with different groups to compare results between countries, outcome measures need to be translated into other languages by following a formal process and the same rigorous validation process also applies as for the original measure. Even though this is lengthy and costly, it is an important procedure to ensure accurate scores when outcome measures are used and compared.
Criterion validity ?
Criterion validity refers to whether the measure correlates with another instrument that measures similar aspects. Preferably, the other instrument is the ‘gold standard’, meaning it has been validated, and is widely used and accepted in the field.
Responsiveness ?
Responsiveness to change refers to whether the measure can detect clinically important changes over time that are related to the course of the disease or to an intervention, such as symptom management.
Interpretability ?
The interpretability of an outcome measure refers to whether the results (which are often a number) can be translated into something more meaningful to the patient, the family or clinician. An interpretable tool should enable a response to these questions: What is severe? What is the cut-off point when the outcome measure is used for diagnosis? How many points correlate with a symptom change?
Floor ceiling effects ?
Floor and ceiling effects occur when scores from an outcome measure are not discriminated below or above a certain level (meaning that they will not detect change). This can be a particular problem in conditions with a very wide range of symptoms.
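A minimal sketch of a floor/ceiling check: the proportion of respondents at the lowest or highest possible total score. The 15% threshold in the comment reflects the commonly cited Terwee criterion; the scores and scale range used here are hypothetical.

```python
import numpy as np

def floor_ceiling(total_scores: np.ndarray, min_score: float, max_score: float):
    """Return the proportion of respondents at the floor and at the ceiling."""
    floor = float(np.mean(total_scores == min_score))
    ceiling = float(np.mean(total_scores == max_score))
    return floor, ceiling

# Hypothetical totals for a 19-item scale scored 1-5 per item (possible range 19-95).
scores = np.array([19, 23, 40, 95, 95, 60, 72, 19, 88, 95])
floor, ceiling = floor_ceiling(scores, min_score=19, max_score=95)
print(f"floor: {floor:.0%}, ceiling: {ceiling:.0%}")  # Terwee's criteria flag > 15% at either extreme
```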
Terwee
[Psychometric ratings diagram (Terwee criteria; see key below): 1, 0, 1, -1, 0, 0, 0, 0]

About psychometric information: The above information on the psychometric properties of the questionnaire is for guideline purposes only. It was, however, derived from a systematic review of measurement properties for this tool. You can find the original study above, by clicking on the “Setting” field. To derive a consistent scoring system from various different types of studies, we often had to re-map scores/categories, so the information above may not accurately reflect the original source. Please refer to the original references for complete information.

Key: scoring system (-1 to 3):
Empty = study did not assess this evidence
-1 = negative data
0 = conflicting/no data
1 = weak evidence
2 = moderate evidence
3 = strong evidence