QH ACCREDITATION. METHODOLOGY

Listed below is a summary outlining the preliminary phase of the Delphi Study, its methodological development, and the results obtained for the Synthetic Quality Indicator.

The SQI was produced using qualitative methodology, by carrying out a Delphi Study. The initial questionnaire was devised according to the following methodology:


Data analysis

The following data sources were researched:

    • “Ad hoc” bibliographic searches for vanguard tendencies and good quality management practices in the services sector, particularly in healthcare.
    • Review of synthetic quality indicators commonly used in Spanish healthcare, at an international level and in the OECD. The search included the following sources, among others:
      • Quality surveys conducted by the Ministry of Health, Social Services and Equality.
      • Quality surveys and systems used in the different Regional Autonomous Communities.
      • European Hospital and Healthcare Federation (HOPE).
      • European Commission's hospitals in transition.
      • The World Health Organisation European Observatory.
      • European Health Management Association.
      • The Organisation for Economic Cooperation and Development (OECD).
      • The National Federation of Hospitals.
      • Health systems from the United Kingdom, France, Canada and the United States of America.
    • Information provided by IDIS:
      • Previous studies
      • Group of experts identified

 

Phases of the study

The study was organised in three phases, preceded by a prior organisational or launch phase and a transversal phase of information and direct communication with panellists.

Click on the links for more information on the different phases of the study.

PHASE 1: ANALYSIS AND FORMULATION

The following activities were conducted in this phase: 

  • A literature review of quality recognition, certification and accreditation systems used in the health sector.
  • A documentary analysis of the literature.   
  • An analysis and summary of the dimensions, criteria and levels of measuring and weighting quality used in each of the systems included in the study, identifying their main characteristics. 
  • Identification of potential expert participants in the study. These experts were not selected randomly: the Spanish Society for Quality in Healthcare and IDIS proposed panellists with prominent knowledge and expertise on healthcare quality systems. Experts were identified from different professional and geographic areas in an attempt to ensure most of the Regional Autonomous Communities were represented.
  • Formulation of the questionnaire for the implementation of the Delphi method. The consistency of the results was guaranteed by ensuring the questions contained no bias, starting from the appropriate selection of the information from the literature review. The questions were worded in a clear, concise and direct manner so as to be easily understood by the experts and not influence their answers. A pilot study was conducted with the Management Committee.
  • Presentation of the questionnaire to the group of endorsing experts, who tested and enriched the content with their input, verifying its validity.

 

Analysis of healthcare quality systems

The table below groups into different categories the main characteristics extracted from each quality recognition, certification and accreditation system used in the healthcare sector.

Prototype of the table completed for each accreditation system

 

QUALITY SYSTEM

A1

Type of Quality System

A2

Characteristics and requirements of the agency

A3

Assessment methods used

A4

Components which monitor the system

B1

Implementation policy and strategy

B2

Scope and % of the services covered

B3

Maturity of the system

 

 

Summary of best quality practices

A table like the prototype above was completed for each of the quality systems included in the study. This analysis of each Quality System (QS) was used in the preparation of the initial questionnaire.

Initially, the idea was for the panellists to give a direct score to each system. Later, the approach was changed to avoid bias. Instead of asking explicitly about the different QS, it was judged more pertinent to use the dimensions and characteristics which define them.

 

Expert group

Panellists were identified from among those who complied with the following criteria:

  • Expert professionals from the healthcare sector (public or private) with renowned prestige in the field of quality management, extensive knowledge, high levels of practical experience and a high-impact contribution to the scientific community in this area. Because management in our Health System is decentralised and is the responsibility of the Regional Autonomous Communities, in addition to the aforementioned criteria, representation from all regions was sought by including panellists from each one. Scroll the table for a complete list.

Autonomous Community | Entity | Name | Surname
Andalusia | Agencia de Calidad de Andalucía | Antonio | Torres Olivera
Andalusia | Agencia de Calidad de Andalucía | Victor | Reyes Alcazar
Aragon | Agencia Nacional de Evaluación de la Calidad y Acreditación (ANECA) | José María | Abad Diez
Madrid | Aliad | Julio | González Bedia
Madrid | ASISA | Carlos | Zarco Alonso
Canary Islands | Asociación Canaria de Calidad Asistencial (ACCA) | Ángel | Hernández Borges
Asturias | Asociación de Calidad Asistencial del Principado de Asturias (PASQAL) | Fernando | Vázquez Valdés
Madrid | Asociación Española para la Calidad (AEC) | Enrique | González María
Madrid | Asociación Madrileña de Calidad Asistencial (AMCA) | Susana | Lorenzo Martínez
Basque Country | Asociación Vasca para la Calidad Asistencial (AVCA-AKEB) | Alberto | Colina Alonso
Catalonia | Fundación Avedis Donabedian (FAD) | Genís | Carrasco Gomez
Basque Country | Clínica Igualatorio Médico Quirúrgico (IMQ) Zorrotzaurre | Nicolás | Guerra Zaldúa
Castile-Leon | Complejo Asistencial de Salamanca | Paz | Rodríguez Perez
Madrid | Subdirección General de Calidad. Consejería de Sanidad. | Alberto | Pardo Hernández
Madrid | DNV GL | Carlos | Navarro Bilbao
Valencia | ERESA Grupo Médico | Antonio | Mollá Bau
Madrid | Fundación Ad Qualitem (FAQ) | Joaquín | Estévez Lucas
Valencia | Fundación Avedis Donabedian (FAD) | Rosa | Suñol Salas
La Rioja | Fundación Hospital Calahorra | Pelayo | Benito García
Madrid | Grupo Hospitalario Quirón | Paloma | Leis García
Madrid | Grupo Vithas | Ángel | Caicoya de Urzaiz
Madrid | HM Hospitales | Celia | Moar Martínez
Extremadura | Hospital San Pedro de Alcántara | Isabel | Tovar García
Cantabria | Hospital Universitario Marqués de Valdecilla | Concepción | Fariñas Alvarez
Madrid | Hospital de Guadarrama | Rosa | Salazar de la Guerra
Balearic Islands | Servei de Salut de les Illes Balears | Carlos | Campillo Artrero
Asturias | Hospital Monte Naranco | Alberto | Fernández León
Madrid | Hospital Puerta de Hierro | Dolors | Montserrat Capella
Madrid | IDC Salud | Celia | García Menéndez
Basque Country | InnovaSalud | Óscar | Moracho del Rio
Catalonia | Instituto Catalán de Oncología | Jordi | Trelis i Navarro
Madrid | Instituto Nacional de Gestión Sanitaria | Mª Antonia | Blanco Galán
Madrid | NISA Hospitales | Mª Carmen | Abarca Torralba
Basque Country | Osakidetza | Susana | Candela Casado
Basque Country | Hospital Galdakoa-Usansolo | Santiago | Rabanal Retolaza
Madrid | Sanitas | Luis | Delgado Cabezas
Castilla-La Mancha | Servicio de Salud de Castilla-La Mancha | Jesús | Fernández Sanz
Murcia | Servicio Murciano de Salud | José Manuel | Alcaraz Muñoz
Andalusia | Sociedad Andaluza de Calidad Asistencial | Reyes | Álvarez-Ossorio García de Soria
Catalonia | Sociedad Catalana de Calidad Asistencial | Ángel | Vidal Milla
Castilla-La Mancha | Sociedad de Calidad Asistencial Castilla-La Mancha | Cesar | Llorente Parrado
Andalusia | Sociedad Española de Calidad Asistencial | José Manuel | Martín Vázquez
Andalusia | Sociedad Española de Calidad Asistencial | Emilio | Ignacio García
Aragon | Sociedad Española de Calidad Asistencial | Pilar | Astier Peña
Catalonia | Sociedad Española de Calidad Asistencial | Manel | Santiña Vila
Galicia | Sociedad Gallega de Calidad Asistencial | Mercedes | Carreras Viñas
Murcia | Sociedad Murciana de Calidad Asistencial | Rafael | Gomis Cebrián
Valencia | Sociedad Valenciana de Calidad Asistencial (SOVCA) | Tomás | Quirós Morato
Valencia | Universidad Miguel Hernández | José Joaquín | Mira Solves

Experts participating in the Delphi study

 

 

QUESTIONNAIRE DESIGN

The questionnaire was based on both the literature review and the information extracted from each model. The tool was initially tested by the project management team and the experts from the endorsing group. A final version of the questionnaire was then produced in Excel format.

The questionnaire used for the first round was of the semi-closed type to enable the experts to add input.

The questionnaire consisted of two sections: 

  • The first section was aimed at rating the conceptual framework (area A) and the operational framework (area B) of any potential Quality System implemented by a health centre. The conceptual framework encompassed four dimensions (A1, A2, A3 and A4) and the operational framework three (B1, B2 and B3), totalling seven. In the first round this section included a total of sixty-nine attributes, fifty-six of which were related to area A and thirteen to area B.
  • The second section was intended to obtain a preliminary overall assessment of an initial list of the Quality Systems commonly used by health institutions. The panellists were asked to add any QS not included which they regarded as relevant.

 

Section 1

AREA A: Conceptual framework of the QS | AREA B: Operational framework or implementation mode of the QS
A1: Type of QS | B1: Policy and strategy for the implementation of the QS
A2: Characteristics and requirements of the Quality System agency | B2: Scope and % of services covered by the QS
A3: Assessment methods used by the QS | B3: Maturity of the QS
A4: Components monitored by the QS |

Section 2

Score of selected systems

Table 1: Summary of sections, areas and dimensions of the questionnaire

 

In order to rate the different systems:

  • Each panellist assigned a score from 0 to 10 (0 as a minimum and 10 as a maximum) to each of the attributes included in the four dimensions of the conceptual framework (A) and the three dimensions of the operational framework (B) of the Quality System.
  • After rating each of the items in the 7 dimensions, panellists were asked to rank the attributes of each dimension in order of priority relative to one another: for a dimension with 5 items values ranged from 1 to 5, for 7 items from 1 to 7, and so on. The fact that 1 represented top priority led to a certain “contamination” of the experts' scoring (amended with the pertinent explanations). This arose because scores for each attribute ran in the opposite direction (0 as a minimum and 10 as a maximum) to the weightings, which assessed the relative importance of one attribute compared with another from the same dimension, where 1 was the top priority.
  • Once the scoring for this first section had been completed, the panellist was asked to address the second, giving a weighted score to each of the Quality Systems included in the questionnaire.
  • All the dimensions in the first round included a discriminative assessment of the level of difficulty encountered by the panellist when completing the questionnaire.

Experts were asked to take the following assumptions into account when scoring:

  • A priori, no model is better than another.
  • All the institutions which have been awarded external accreditation, certification or recognition have endeavoured to improve the quality of their healthcare provision.
  • There is a significant case-based variability in the Quality Systems, which requires constant updating, and which makes all lists run the risk of becoming obsolete in a short period of time.

 

PHASE 2: DELPHI METHOD

To carry out the study, a Delphi Method was decided upon. This method integrates qualitative and quantitative research aspects. It can be applied for some aspects without the need for reaching statistical significance, which is only required for weighting the opinions.

The achievement of objectives using the Delphi method for any given subject depends on two crucial factors, which were carefully observed in this study (Astigarraga E, 2006):

  • The correct choice of panellists.
  • The proven validity of the questionnaire.

The method was implemented following recommended guidelines found in the literature for its optimal performance (Helmer O, 1983; Landeta J, 2006; Gordon T, 2007):

  • Anonymity: No contact, identification or acquaintance should exist among panellists. 
  • Controlled feedback: The full results of the previous round are not disclosed to the panellists; only those on which no consensus has been reached are circulated.
  • Report of group response statistics: All data is presented to each of the experts in a table stating the means and degree of dispersion.

 

PARTICIPANT RECRUITMENT

The invitation to participate in the study was conducted in two phases:

  • In the first phase each panellist was contacted individually to explain:
    • The importance and main characteristics of the study and its potential impact on the Spanish Health System.
    • The objectives and scope of the project.
    • The method used: The expected number of rounds, the time required to answer the questionnaire, the approximate duration of the study, the guaranteed anonymity of answers.
    • The type of candidate sought, in addition to the reasons for which they were selected and the expected benefits arising from their participation.
    • The potential use and impact of the study and the possible publication of the results, featuring the panellist’s name, for which authorisation was requested. 
  • Once telephone confirmation from the panellists was obtained, the Managing Director of IDIS sent the questionnaire to each expert, with a study presentation letter formalising their involvement and thanking them for their participation.

 

Dispatch of Questionnaires

Development of the Delphi study

The time for the execution of the study was initially estimated at three months. The time between issuing the first questionnaire and the receipt of the second was fifty-six days, as illustrated in Table 2:

 

 

 

          | Date of Remittal | Deadline  | Last Panellist
1st Round | April 30th       | May 16th  | May 25th
2nd Round | June 11th        | June 17th | June 26th

Lagging panellists: 10-15 days is regarded as acceptable.

Table 2: Time taken for the first and second rounds

 

In the literature, Delphi studies completed in less than two months are considered an optimal result. In the first round participants are usually given three weeks to respond; this period was reduced to sixteen days in this study, even though the dates included several public holidays (the first few days of May).

The initial conception of the study was compared with the results achieved in the practical implementation of the method, which may be summarised as follows:

  • Although three rounds were initially planned for the execution of the study, only two were required in the end.
  • The dropout rate was minimal in both rounds and well below that regarded as standard or acceptable in the literature.

The literature establishes a direct relationship between the dropout rate and the following aspects in studies using the Delphi method:

  • The time required to complete the questionnaire.
  • The total time of all the rounds required to execute the study.

The longer these two times, the more experts drop out of the study.

Of the 51 panellists selected, 2 dropped out (one of them on receipt of the questionnaire); only 4 of the remaining 49 failed to return a completed questionnaire, leaving a total of 45 participants (91.84% of the participating panellists) in the first round of Delphi.

Table 3 illustrates the number of panellists who actually participated and those who dropped out in each round.

 

 

                           | 1st round of Delphi | 2nd round of Delphi
Number of panellists       | 49                  | 45
Panellists who dropped out | 4                   | 1

Table 3: Summary of the participants in the first and second rounds of Delphi.

 

The table below summarises the comparison between the initial planning of the study and its actual execution in terms of rounds, dropout rates and responses planned and obtained for each round.

Figure 4: A comparison of the initial planning of the study with its actual execution

 

The response rate obtained was over 90%. This rate significantly enhances the validity of the study (Gordon T, 1993).

The number of attributes included in each round, for the areas in both sections 1 and 2, is illustrated in Table 5.

Section | Area | Dimension | Nº of Attributes Round 1 (∑69) | Nº of Attributes Round 2 (∑32)
1 | A: Conceptual framework | A1: Type of Quality System | 7 | 4
1 | A: Conceptual framework | A2: Characteristics and requirements of the Quality System agency | 10 | 5
1 | A: Conceptual framework | A3: Assessment methods used by the Quality System | 15 | 6
1 | A: Conceptual framework | A4: Components monitored by the Quality System | 24 | 11
1 | B: Operational framework | B1: Policy and strategy for the implementation of the Quality System | 5 | 2
1 | B: Operational framework | B2: Scope and % of services covered by the Quality System | 5 | 2
1 | B: Operational framework | B3: Maturity of the Quality System | 3 | 2
2 | Scores of the pre-selected systems | | 48 options, with open questions to enable the panellists to rate the additional systems they proposed. | Only those rated by at least 5 panellists were included; those which failed to correspond to the objectives of the study were discarded, leaving a final total of 34 options to rate.

Table 5: Comparison of the number of attributes included in the first and second rounds.

The number of attributes was reduced between the rounds.

In section 1, attributes for which a full consensus was reached in the first round, in relation to both maximum and minimum scores, were not included in the second round. The number of attributes dropped from the initial 69 to 32.

In section 2 of the questionnaire, of the forty-eight systems assessed in the first round, only those rated by at least 5 panellists were included in the second round. Furthermore, those which represented an accreditation entity rather than a system were eliminated. Only two panellists proposed the inclusion of new Quality Systems in the section but these were ruled out because the required minimum 5 endorsements were not reached.

The results in section 2 were inconclusive due to a reduction in the response rate obtained with regard to rating the dimensions and due to higher scores being assigned to systems with which the panellists were more familiar, as illustrated by the results obtained in the first round.

 

CHARTING AND ANALYSIS OF RESULTS

Results from the first and second rounds

A descriptive statistical analysis was conducted on the importance scores the panellists assigned to the attributes. Parameters of position (mean, median and mode) and dispersion (interquartile range, standard deviation and percentiles) were calculated to determine the symmetry and concentration of the distribution of the experts' importance ratings around a mean value.

The attributes in each dimension were grouped based on the values of the mean and median and the dispersion of the scores.  The median was considered the determining value, although all parameters were taken into account.

Two groups of attributes in each dimension were obtained as a result of this process: those for which a consensus had been reached (either regarded as the most important or as the least important) and those for which a second round of the Delphi study was required to confirm their importance score.
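As a rough sketch, this grouping step can be expressed in code. The thresholds below are assumptions: the source states only that the median was the determining value and that dispersion was taken into account.

```python
from statistics import median, quantiles

def classify_attribute(scores: list[float]) -> str:
    """Group an attribute by consensus status (sketch; thresholds assumed).

    Low dispersion means the panel agreed; the median then decides whether
    the consensus is on high or low importance. Otherwise the attribute
    goes to the second Delphi round.
    """
    med = median(scores)
    q1, _, q3 = quantiles(scores, n=4)  # quartiles; q3 - q1 is the IQR
    if q3 - q1 <= 2:                    # assumed consensus threshold
        if med >= 9:
            return "consensus: high importance"
        if med <= 4:
            return "consensus: low importance"
    return "second round"
```

Any attribute classified as "second round" would be re-rated by the panel before a final score is assigned.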

The results of the first round were colour coded to identify the consensus status of the items, and whether consensus was reached on their high or low importance.

Table 6 illustrates the colour code used to group the different attributes in accordance with the score obtained in the first round:

 

Green: attributes with high consensus in relation to their high importance

Pink: attributes with high consensus in relation to their low importance

Orange: attributes to be rated in the second round with higher average scores

White: attributes to be rated in the second round with lower average scores

Table 6: Colour coding of the attributes according to their score

 

For the second round:

  • Only the final two types of attributes (orange and white) were included.
  • The groups of attributes with a high consensus (green and pink) in the first round were not included.

In order to aid understanding of the method, the following tables illustrate the results obtained for the first dimension (A1: Type of Quality System) of area A in section 1; the same format was used for the remaining dimensions.

 

Table 7 shows statistical results obtained in the first round for dimension A1: Type of Quality System.

Dimension A. Area A1. Type of Quality System. Statistical results for the 1st round

              | IP1  | IP2  | IP3   | IP4  | IP5   | IP6   | IP7
N (valid)     | 44   | 44   | 44    | 44   | 44    | 42    | 42
N (lost)      | 1    | 1    | 1     | 1    | 1     | 3     | 3
Mean          | 6.59 | 3.71 | 8.43  | 4.30 | 7.71  | 7.19  | 6.86
Median        | 8.00 | 4.00 | 9.00  | 4.00 | 8.00  | 8.00  | 7.00
Mode          | 8.00 | 1.00 | 9.00  | 3.00 | 10.00 | 10.00 | 8.00
Minimum       | 0.00 | 0.00 | 5.00  | 0.00 | 2.00  | 2.00  | 2.00
Maximum       | 10.00| 7.00 | 10.00 | 9.00 | 10.00 | 10.00 | 10.00
Percentile 10 | 1.50 | 1.00 | 6.00  | 1.50 | 4.50  | 4.00  | 4.00
Percentile 25 | 5.00 | 1.25 | 8.00  | 3.00 | 6.25  | 5.00  | 5.00
Percentile 50 | 8.00 | 4.00 | 9.00  | 4.00 | 8.00  | 8.00  | 7.00
Percentile 75 | 9.00 | 6.00 | 9.00  | 6.00 | 9.00  | 9.25  | 8.00
Percentile 90 | 9.50 | 7.00 | 10.00 | 7.00 | 10.00 | 10.00 | 9.70

Table 7: Statistical results obtained in the first round for dimension A1

 

Caption: IPx: Importance of the question, or score for the attribute to which the question refers.

Table 8 summarises the attributes which obtained a high consensus with regard to their importance after the statistical processing of the first round and which were not rated again, as well as those which were included in the second round. It includes a brief explanation of the scores which led to the withdrawal or inclusion of each attribute in the questionnaire for the second round.

IP-3: not included in the second round due to consensus on its high importance (mean and median close to or equal to 9; 25th percentile equal to 8).

IP-5, IP-6 and IP-1: to be rated in the second round with high average scores (median of 8, mean greater than 6 and 25th percentile greater than 5). In accordance with the different parameters, the order of importance would be: IP-5, IP-6 and IP-1.

IP-7: to be rated in the second round with low average scores (median of 7, mean greater than 6 and 25th percentile equal to 5).

IP-2 and IP-4: not included in the second round due to consensus on their low importance (median of 4, mean less than 5 and 25th percentile equal to or less than 3).

Table 8: Classification of attributes according to their consensus status and assignment of attributes for the second round.

 

Thus, attribute IP-3 (due to its high importance) and attributes IP-2 and IP-4 (due to their low importance) were excluded from dimension A1 in the second round. The 4 remaining attributes were included in the questionnaire for the second round.

The results obtained for the A1 attributes in the second round graded them in the same order as in the first round, as shown below in Table 9:

Dimension A. Area A1. Type of Quality System. Statistical results for the 2nd round

              | IP1  | IP5   | IP6   | IP7
N (valid)     | 42   | 42    | 42    | 41
N (lost)      | 2    | 2     | 2     | 3
Mean          | 7.60 | 7.79  | 7.79  | 6.98
Median        | 8.00 | 8.00  | 8.00  | 7.00
Mode          | 8.00 | 9.00  | 8.00  | 7.00
Minimum       | 3.00 | 0.00  | 0.00  | 0.00
Maximum       | 10.00| 10.00 | 10.00 | 10.00
Percentile 10 | 6.00 | 6.00  | 5.00  | 4.00
Percentile 25 | 7.00 | 7.00  | 7.00  | 7.00
Percentile 50 | 8.00 | 8.00  | 8.00  | 7.00
Percentile 75 | 8.00 | 9.00  | 9.00  | 8.00
Percentile 90 | 9.00 | 9.00  | 10.00 | 9.00

Table 9: Statistical results, dimension A1, second round.

The final score for each attribute was calculated based on two values:

  • Its median.
  • The round in which the consensus was reached.

Attributes for which the experts reached a consensus on high importance in the first round (green) cannot have the same score as those which obtained high scores in the second round. Attributes which were unanimously regarded as being of low importance were awarded minimum scores.

In other words, the inclusion of a new or “corrected” scale was proposed in order to boost the weighting of attributes regarded as more important and penalising those labelled as less important.

The correction criterion, i.e. how the value of the median weight is adjusted, is established as follows:

The criteria for correcting or altering the scale are applied to the value of the median obtained in the first and second rounds. When a median importance of between 9 and 10 (green) is registered in the first round, the change of scale increases the value to 10; if a score of 9 to 10 is obtained in the second round, the change of scale registers it as 8, penalising the attribute because an additional round was required to reach a consensus.

The same situation applies to attributes classified as low importance in the first round. A score of 5 is changed to 2 using the change of scale and to 1 when lower than 5. A score lower than 6 in the second round also changes to 1.

A score of 8 is set as a turning point: when a median score of 8 is registered in both the first and second rounds, rescaling does not apply and the score remains at 8. In remaining cases and depending on whether we are dealing with the first or second round, scores of 8 or less are subject to a reduction on the change of scale. If a score of 7 is obtained in both rounds, only the first round score is corrected to avoid penalising a score which remains constant in both rounds.

Rescaling penalises the addition of low-scoring attributes to prevent them from reaching a value of significance when adding both scores together.
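The rescaling rules above can be sketched in code. The handling of intermediate scores (between 5 and 8) is an assumption, since the text does not enumerate every case explicitly:

```python
def rescale(median_score: float, round_no: int) -> float:
    """Corrected scale for an attribute's median importance (sketch).

    Round 1: 9-10 -> 10 (consensus rewarded); round 2: 9-10 -> 8
    (penalised, since a second round was needed). A median of 8 is
    the turning point and is never rescaled. Low scores collapse
    to 1 or 2.
    """
    if median_score == 8:        # turning point: unchanged in either round
        return 8
    if round_no == 1:
        if median_score >= 9:    # consensus on high importance
            return 10
        if median_score == 5:    # consensus on low importance
            return 2
        if median_score < 5:
            return 1
        return median_score - 1  # assumption: 6-7 reduced by one point
    # round 2
    if median_score >= 9:        # penalised for needing a second round
        return 8
    if median_score < 6:
        return 1
    return median_score          # assumption: 6-7 kept as-is in round 2
```

Note how a 7 in the first round becomes 6 while a 7 in the second round stays 7, matching the rule that only the first-round score is corrected when a 7 repeats across rounds.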

 

Weighting each of the Areas and Dimensions

In order to conduct the subsequent weighting of the different areas and dimensions in the 2nd round, the panellists were asked to:

  • Allocate 100 points between areas A and B in Section 1 of the questionnaire so that A + B = 100 points.
  • Allocate 100 points among the 4 dimensions comprising area A, so that A1 + A2 + A3 + A4 = 100.
  • Allocate 100 points among the 3 dimensions comprising area B, so that B1 + B2 + B3 = 100.
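The source does not state how the individual 100-point allocations were aggregated into the final coefficients. A plausible sketch, assuming the medians of the allocations are taken and renormalised to sum to 100:

```python
from statistics import median

def coefficients(allocations: list[dict[str, float]]) -> dict[str, float]:
    """Aggregate panellists' 100-point allocations into weights (sketch).

    Takes the median allocation per area or dimension and renormalises so
    the weights again sum to 100. The aggregation rule is an assumption.
    """
    keys = list(allocations[0])
    meds = {k: median(p[k] for p in allocations) for k in keys}
    total = sum(meds.values())
    return {k: round(100 * v / total, 2) for k, v in meds.items()}

# Hypothetical example: three panellists split 100 points between A and B.
panel = [{"A": 70, "B": 30}, {"A": 60, "B": 40}, {"A": 80, "B": 20}]
```

The same function would apply unchanged to the four dimensions of area A or the three dimensions of area B.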

 

The results obtained were entered in the tables shown below (Tables 10, 11 and 12):

Weighting on 100 points in Areas A and B

              | Area A | Area B
N (valid)     | 42     | 42
N (lost)      | 2      | 2
Mean          | xA     | xB
Median        | α      | β
Mode          | MoA    | MoB
Percentile 10 |        |
Percentile 25 |        |
Percentile 75 |        |
Percentile 90 |        |

Table 10: Weighting of areas A and B

 

Weighting on 100 points of the Dimensions of Area A: Conceptual framework

              | Dimension A1 | Dimension A2 | Dimension A3 | Dimension A4
N (valid)     | 41           | 43           | 43           | 43
N (lost)      | 3            | 1            | 1            | 1
Mean          | x1           | x2           | x3           | x4
Median        | A1           | A2           | A3           | A4
Mode          | MoA1         | MoA2         | MoA3         | MoA4
Percentile 10 |              |              |              |
Percentile 25 |              |              |              |
Percentile 75 |              |              |              |
Percentile 90 |              |              |              |

Table 11: Weighting of the dimensions in area A

 

 

Weighting on 100 points of the Dimensions of Area B: Operational framework

              | Dimension B1 | Dimension B2 | Dimension B3
N (valid)     | 43           | 43           | 43
N (lost)      | 1            | 1            | 1
Mean          | y1           | y2           | y3
Median        | B1           | B2           | B3
Mode          | MoB1         | MoB2         | MoB3
Percentile 10 |              |              |
Percentile 25 |              |              |
Percentile 75 |              |              |
Percentile 90 |              |              |

Table 12: Weighting of the dimensions in area B

 

PHASE 3: DEVELOPMENT OF THE SYNTHETIC QUALITY INDICATOR

Up until now, most Healthcare Quality Systems have carried out independent assessments of the different quality aspects of a healthcare organisation. The creation of a concise, multidimensional and comprehensive indicator is important for standardising and assessing quality in health institutions, irrespective of the quality system used by the organisation.

According to the European Commission, synthetic indicators “are based on different sub-models with no common unit of measurement and no obvious means of weighting”. In the eyes of the OECD “they are variables which provide summarised information on specific phenomena and areas, due to the allocation of a supplementary value which increases the significance of the parameter considered individually”.

The three basic attributes the OECD assigns to synthetic indicators are simplification, quantification and dissemination (OECD, 1997). The Synthetic Quality Indicator (SQI):

  • Summarises into a single figure the areas, dimensions and attributes of quality.
  • Quantitatively weights the importance of each of the aspects assessed in relation to one another.
  • Enables transfer and dissemination of information regarding the object of analysis and assessment.

The Synthetic Quality Indicator is not an end in itself but a tool to facilitate the creation of databases to support homogeneous comparisons and analysis of quality level progression by the different healthcare organisations volunteering for self-assessment.

The Synthetic Quality Indicator will provide an instant snapshot of the position of a specific institution at any given time, but should not be regarded as a fixed picture for a specific year. It is recommended that comparison periods of more than a single year (at least 5-10) are used when assessing a healthcare institution.

 

Index development and validation

Formula for the Synthetic Quality Indicator

The Synthetic Quality Indicator is a linear combination of the 7 dimensions taken into consideration:

SQI = (α1*A1) + (α2*A2) + (α3*A3) + (α4*A4) + (β1*B1) + (β2*B2) + (β3*B3)

The idea was to ensure that the Indicator remained simple to understand, and as such it was designed with a score which ranges from 0 to 100. The problem lies in determining the αi and βi coefficients, constants which do not depend on any centre, and the score for each Ai and Bi dimension variable, which varies with each centre.

 

Value of the αi and βi coefficients

The following factors were taken into account in order to obtain the coefficients:

  • The weightings assigned to each of the Areas A and B by the panellists.
  • The weightings assigned to each dimension of the Areas A and B by the panellists.

 

Values of each dimension (possible ranges)

The initial hypotheses and conditions were:

  • Each attribute included in each dimension may only be given two values: 1 = the content of the attribute is fulfilled in the Quality System; 0 = the content of the attribute is not fulfilled in the Quality System.
  • The values of the attributes within a dimension cannot all be the same; each should vary in accordance with the relative importance it represents in that dimension, as agreed upon by the panellists.
  • The maximum value of a dimension, if all the items are complied with, must be equal to its weight or coefficient in the Indicator.

The scores of two attributes in a single dimension can only be the same if they have been assigned the same importance (median).
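The weighting rule above can be sketched as follows. The importance ratings are hypothetical; the weight of each attribute is its median importance divided by the sum of the medians of all attributes in the dimension, so attributes with equal medians receive equal weights.

```python
from statistics import median

def attribute_weights(panel_ratings):
    """Weight of each attribute within a dimension: the median of the
    importance ratings given by the panellists, divided by the sum of
    the medians of all attributes in that dimension (weights sum to 1)."""
    medians = [median(ratings) for ratings in panel_ratings]
    total = sum(medians)
    return [m / total for m in medians]

# Hypothetical importance ratings (scale 1-10) from three panellists
# for three attributes of one dimension:
weights = attribute_weights([[8, 9, 8], [6, 5, 6], [8, 9, 8]])
```

Because the first and third attributes were assigned the same median importance, they receive the same weight, in line with the rule stated above.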

 

Calculation of the Synthetic Quality Indicator

SQI = (α1*A1) + (α2*A2) + (α3*A3) + (α4*A4) + (β1*B1) + (β2*B2) + (β3*B3)

The score for each dimension is obtained by adding the scores obtained by the attributes (referred to below by the generic letter j) of this dimension:

Ai = ∑ Aj*Cj;  Bi = ∑ Bj*Cj

where Cj may take two values: 1 if the item is complied with and 0 if it is not. It is a variable, specific to each centre, with a dichotomic value.

Aj = the weight of attribute j in the Ai dimension (the median of the importance given to attribute j divided by the sum of the medians of the attributes of the Ai dimension). Bj = the weight of attribute j in the Bi dimension (the median of the importance given to attribute j divided by the sum of the medians of the attributes of the Bi dimension). This indicator was sent to all the participants in the study and presented to the endorsing group after having been approved by the Management Committee.
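A minimal sketch of the dimension score, assuming the per-attribute weights have already been normalised as described above (hypothetical values are used for the weights and compliance):

```python
def dimension_score(weights, compliance):
    """A_i = sum over j of (weight of attribute j) * C_j, where C_j is
    the dichotomic value: 1 if attribute j is fulfilled in the centre's
    Quality System, 0 if it is not."""
    return sum(w * c for w, c in zip(weights, compliance))

# Hypothetical weights for three attributes, and a centre that fulfils
# only the first one:
score = dimension_score([0.5, 0.3, 0.2], [1, 0, 0])
```

Full compliance yields a dimension score of 1, which the αi or βi coefficient then scales to the dimension's weight in the Indicator.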

 

Dissemination roadmap

Validation of the Synthetic Quality Indicator

The suitability of the Indicator was tested in a simulation in 5 hospitals with different characteristics: 2 benchmark high-level public hospitals, 1 medium-level public hospital, 1 medium-stay public hospital and 1 high-level private hospital. The simulation enabled us to rank the hospitals successfully.

 

Distinctive mark of the Healthcare Quality System

The IDIS has created a distinctive mark of quality associated with the Synthetic Quality Indicator, to be awarded in recognition of the degree of quality attained by a specific health institution in accordance with its score.

 

Conclusions and Recommendations

  • Assigning a score to a specific Quality System is complex. This is not a mere academic or technical activity, but one which responds to a series of specific purposes (Molas Gallart and Castro Martínez, 2007). This is most likely the reason why no consensus has been reached in relation to the creation of a Synthetic Quality Indicator in the past.
  • This study has succeeded in producing a Synthetic Quality Indicator which allows aggregation in a single model of each of the main areas and dimensions of quality, as a result of the following factors:
      • Clear objectives: from the outset the idea was to devise a tool which would enable the assessment, in homogeneous terms, of the quality of public and private institutions of the National Health System, and the recognition of excellence and sustained efforts for improvement.
      • Changing the initial approach: refocusing the study on the areas and dimensions of quality rather than on the score of a specific Quality System, which might generate bias, for example when a panellist has greater knowledge of, or is more familiar with, a certain Quality System.
      • The positive participation in the study of the Scientific Societies associated with quality, involved from the initial planning and selection of panellists through to the monitoring and assessment of results.
      • Selection of participating panellists from among professionals of the different Autonomous Communities with demonstrable subject knowledge, expertise and renowned prestige in the field of healthcare quality management.
      • Validity of the questionnaire, upheld by the use of the main dimensions of quality and the enriching input of the project management team, the endorsing group and the participating panellists.
      • The high level of commitment of the participants in the study, ensuring a high degree of consensus and a significant response rate, which allowed the study to be completed in two rounds and a robust Synthetic Quality Indicator to be obtained.
      • Carrying out a simulation, whereby several different hospitals voluntarily and anonymously applied the Synthetic Quality Indicator, submitting the responses obtained and enabling a comparison.
  • The Synthetic Quality Indicator was produced with scientific rigour, using the Delphi method based on qualitative analysis (expert opinions) and quantitative processing of the responses (statistical treatment).
  • The Synthetic Quality Indicator has been designed to reward institutions which strive to implement a continual and progressive Quality System, so that it can be assessed by an independent agency and awarded external recognition, certification and accreditation.
  • The Synthetic Quality Indicator is a “live” indicator, given the constant emergence of new Quality Systems in the services area, and, more specifically, in the health sector. In this context, it is important to count on a Synthetic Quality Indicator which allows periodic updates.
  • The creation by IDIS of a distinctive mark associated with the Synthetic Quality Indicator score obtained by an institution provides external recognition, attributing visibility to a health organisation by illustrating its quality performance. This is an attractive proposition in a globalised society such as ours and in the European context, where the free circulation of patients and healthcare professionals presents an opportunity for the Spanish health sector, both from a social point of view and from a health-related, technological and economic perspective.

This IDIS project provides any healthcare institution, whether public or private, working in the primary, specialised or social sectors, with a free self-assessment tool, in the form of the Synthetic Quality Indicator, which will allow comparison in a homogeneous and anonymous manner, with peers or with itself at a different time point, preserving confidentiality. The tool's greatest strength is that it brings together all the existing quality systems in our environment, exploiting the synergies found among them.