Public Sector Economics

Efficiency vs effectiveness: an analysis of tertiary education across Europe



Ozana Nadoveza Jelić*
   
Margareta Gardijan Kedžo*
Article   |   Year:  2018   |   Pages:  381 - 414   |   Volume:  42   |   Issue:  4
Received:  June 1, 2018   |   Accepted:  October 21, 2018   |   Published online:  December 14, 2018
https://doi.org/10.3326/pse.42.4.2


 

Abstract


This paper deals with tertiary education efficiency and effectiveness across 24 European Union countries in four sub-periods between 2004 and 2015. The efficiency scores are computed using Data Envelopment Analysis (DEA). We try to raise awareness of the quality, rather than the quantity, of educational outputs and inputs by introducing a quality-based correction of the DEA efficiency score, which we regard as effectiveness. Our results show that quality considerations affect the relative positions of countries regarding their efficiency scores. In other words, some less developed countries, which are efficient in the quantity-based model, fail to reach the defined efficiency frontier once quality indicators of outputs are considered. On the other hand, some inefficient developed countries improve their DEA-based ranking and achieve effectiveness (quality-based efficiency). The same is true for input quality considerations. Since tertiary education cannot be expected to provide the same quality of outcomes with different input qualities, efficiency improves (deteriorates) in the input-output quality-based model in many countries with low (high) quality student bases.

Keywords:  tertiary education; data envelopment analysis; educational efficiency and effectiveness; EU

JEL:  I21, I22, I23


1 Introduction


It is a well-established fact that the quality of education matters more than quantity. Fortunato and Panizza (2015) argue that the sharp increase in cross-country average years of schooling might not accurately represent actual educational gains. According to Pritchett (2013), as cited in Fortunato and Panizza (2015), an increase in years of education in less developed countries, as opposed to developed countries, is not always transmitted into educational benefits. This view is also supported by many relatively recent papers, such as Hanushek and Kimko (2000), Barro (2001), Wößmann (2006), Altinok, Diebolt and Demeulemeester (2014) and Barro (2013), with Barro (2013) concluding that the "quality and quantity of schooling both matter for growth but quality is much more important". Additionally, Pritchett (2001), who was unable to establish a positive association between increasing educational attainment and per capita income growth, argues that educational quality may have been so low that "years of schooling" created no human capital.

Because of the importance of educational services for growth, attitudes, and political and social awareness, they are provided and publicly financed, to a greater or lesser extent, by practically all governments around the world. Additionally, educational externalities are a textbook example of market failure and one of the most important motives behind government intervention in this sector. According to Szirmai (2015), after World War II the expansion and improvement of education were generally considered essential to development. Awareness of the role of education in the development process resulted in a far-reaching expansion of education. Over the course of time, increased government expenditures on education translated into higher levels of education. Consequently, higher education enrolments have grown significantly over the last three decades. According to World Bank (2018) data, the world gross enrolment ratio in tertiary education1 grew from 13% to 35% during the 1985-2015 period. Growth has been even more impressive in the European Union (EU), where the average annual growth rate of the gross enrolment ratio in tertiary education reached 3.5%. This raised the gross enrolment ratio in tertiary education from 25% in 1985 to 68% in 2014.
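As a quick arithmetic check of these figures (our own calculation, not one reported by the World Bank), compounding the 1985 ratio at the stated average growth rate over the 29 years to 2014 indeed reproduces the 2014 figure:

$$25\% \times 1.035^{29} \approx 25\% \times 2.71 \approx 68\%.$$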

However, as Szirmai (2015) puts it, "Since the 1970s, optimism about the contributions of education has been shaken and more emphasis is given to improving the quality of education." The author notes that not all educational investments are effective and efficient in the development process. Due to the potential ineffectiveness of educational inputs, the quality of education can be unsatisfactory. Thus, rising educational coverage and duration of education, as well as government and even private educational expenditures, are not always efficiently transmitted into higher productivity and wages, growth rates and better institutions. Therefore, it can be argued, it is not quantity that underlies the successful exploitation of all forms of educational benefits, but the quality and the effectiveness of educational inputs and investments. Although efficiency and effectiveness are similar concepts, they are not synonyms. Viljoen (cited in Kenny, 2008) defined efficiency as relating to "how well an activity or operation is performed". Effectiveness, by contrast, relates to performing the correct activity or operation. In other words, "efficiency measures how well an organization does what it does, but effectiveness raises value questions about what the organization should be doing in the first place".

There is a significant body of literature dealing with the efficiency of all levels of national educational systems in the EU. Many of these studies use data envelopment analysis (DEA) because DEA, as a nonparametric method of mathematical programming, enables the calculation of the relative efficiency of fairly homogeneous and comparable units given multiple criteria. These criteria dictate the choice of certain input variables, whose values should be as small as possible, and certain output variables, whose values should be as large as possible. The choice of criteria, and consequently the choice of variables, defines the concept of the research.

Conclusions of various DEA-based studies sometimes differ significantly, which makes it impossible to draw general conclusions concerning tertiary educational efficiency at the EU level. Differences in conclusions mostly arise from the diverse selection of inputs and outputs considered within different studies. Additionally, some papers deal with a narrow sample of countries (e.g. Ahec Šonje, Deskar-Škrbić and Šonje, 2018; Yotova and Stefanova, 2017), i.e. homogeneous countries with similar development levels, while others deal with a broader and more or less heterogeneous set of countries, which can also affect the difference in the results (Aubyn et al., 2009; Aristovnik and Obadić, 2011; Toth, 2009).

Still, most papers that use the DEA approach compare tertiary education between countries considering only the definition of efficiency. Some papers deal with quality issues, but mostly on the output side of the educational "production function". Therefore, questions regarding the quality of educational inputs and outputs and their effectiveness are usually covered only partially. In this paper, we argue that a focus on efficiency alone can give misleading results that could translate into flawed educational policy prescriptions.

The paper is organized as follows. The second section summarizes previous research findings. The third section gives the rationale for selected inputs and outputs as well as a glimpse of the educational inputs and outputs in the EU. The fourth section deals with the methodology and the fifth presents and discusses the main results. The last part of the paper provides comments on policy implications and future research recommendations.



2 Literature overview


DEA is a generally suitable method for country-level public sector efficiency evaluation2 and is commonly used and widely accepted as an appropriate approach in tertiary education efficiency research. For example, to rank eleven Eastern European countries according to their tertiary education efficiency during the 2005-2013 period, Ahec Šonje, Deskar-Škrbić and Šonje (2018) use input-oriented DEA with variable returns to scale (VRS). The authors use expenditure on tertiary education per pupil as a percentage of GDP per capita as the input variable, with the share of the unemployed with tertiary education in the total number of unemployed (model 1) and a World University Rankings score (model 2) as alternative output measures. However, the authors consider models with only one input and one output variable, which limits the possibility of drawing more general conclusions.

Yotova and Stefanova (2017) used the same method on a set of countries similar to that chosen by Ahec Šonje, Deskar-Škrbić and Šonje (2018). As the input variable, the authors used total expenditure on tertiary education per student as a percentage of per capita GDP in 2012, while the set of educational output variables included three indicators: tertiary educational attainment (age 25-34), the employment rate of the population with tertiary education outside the risk of poverty and social exclusion, and the mean monthly earnings of a person with tertiary education as a share of per capita GDP in 2014. Again, the analysis is limited to a single input. It should be noted that both studies include some educational output quality indicators, but they do not consider any educational input quality measures, which could lead to biased results and conclusions.

Toth (2009) analyzed the efficiency of tertiary education in 20 EU countries in 2006 using output-oriented DEA with variable returns to scale (VRS). The author used the ratio of expenditure on higher education to GDP as the educational input, and the ratio of people with a degree to the total population as well as the employment rate of people with a degree as educational output variables. Besides standard outputs and inputs, the author used two non-discretionary variables (parental educational attainment and public-to-total expenditure GDP per capita in current US$). However, Toth's (2009) results differ significantly from other related studies that include EU countries3. She found, for example, that Denmark and Italy (among others) share the first position regarding tertiary education efficiency in the 20 analyzed EU countries, while Aristovnik and Obadić (2011) and Aubyn et al. (2009) rank these countries as relatively inefficient.

Aristovnik and Obadić (2011) used output-oriented DEA with variable returns to scale (VRS) to assess tertiary education efficiency in a broad set of countries (a selected group of EU and OECD countries) during the 1999-2007 period. The analysis included input data on expenditure per student (tertiary, % of GDP per capita) and school enrolment (tertiary, % gross), and output/outcome data, i.e. school enrolment (tertiary, % gross), the labor force with tertiary education (% of total) and the unemployed with tertiary education (% of total unemployment). To assess technical efficiency with regard to different inputs and outputs/outcomes, the authors tested three model specifications. Two of the three considered outputs are standard educational quantity indicators, while the last can be regarded as a quality indicator. In the conclusion, the authors emphasize the need to consider some educational quality data.
 
The most comprehensive study employing DEA methodology to assess the efficiency of tertiary education in a broad set of countries is by Aubyn et al. (2009). The authors used two approaches: input- and output-oriented DEA with variable returns to scale (VRS). The analysis was conducted over two subperiods: 1998-2001 and 2002-2005. In the first model, the authors used the numbers of academic staff and students as inputs, while the second model considered spending in private government-dependent institutions (in % of GDP) as the input variable. A weighted number of graduates and a weighted number of published articles were used as output variables in both models. All educational inputs and outputs considered in that paper can be regarded as quantitative. However, the study includes a number of non-discretionary measures, such as selection of students, budget autonomy, staff policy, output flexibility, evaluation, funding rules and PISA results4, which can be seen as qualitative measures (mostly) of inputs.

It should be noted that conclusions differ across the abovementioned papers, which makes it impossible for us to draw any general conclusions on tertiary educational efficiency at the EU level5. We suspect that differences in conclusions mostly arise from the diverse selection of inputs and outputs considered within different papers. However, the differences in the conclusions of the reviewed papers also arise from the different samples of countries. That is, two papers deal with a narrow sample of countries, i.e. homogeneous countries with similar development levels, while others deal with a broader and heterogeneous set of countries, which can also produce different results. Still, differences arise even when the samples are relatively similar. For example, Aristovnik and Obadić (2011) and Aubyn et al. (2009) use the same number and coverage of countries, and even the same time periods, in different model specifications (variables), but the results sometimes differ significantly. For example, Aristovnik and Obadić (2011) rank the Czech Republic first in their first model but 33rd in their second. Similarly, in Aubyn et al. (2009) Cyprus is ranked first in the first model (1998-2001) but 27th in the second model (1998-2001)6.



3 Data: tertiary education inputs and outputs


This paper differentiates between quantity and quality measures of educational inputs and outputs, which enables us to discriminate between tertiary education efficiency and tertiary education effectiveness. Since there is no consensus regarding the appropriateness of available inputs and outputs, it seemed inappropriate to make an ad hoc decision to include some and exclude other inputs and outputs used in previous research. Therefore, this paper uses a somewhat broader set of inputs and outputs than most of the papers presented in the literature overview. It also considers quality indicators on both sides of the educational production function, the input and the output side. This decision comes at a cost, as the discriminatory power of the method becomes questionable as the number of variables increases, owing to the loss of degrees of freedom (Cooper, Seiford and Tone, 2006:106). However, future research should try to detect the key inputs and outputs in the tertiary education "production" process and try to synthesize them so as to obtain more information with fewer data/variables. This approach could lead to more robust and more consistent DEA-based conclusions regarding tertiary education efficiency.

To our knowledge, there is no precise definition and delimitation of quantitative and qualitative educational inputs and outputs. According to Lee in Bourguignon, Elkana and Pleskovic (2007), the outcome of education is composed of both the quantity and the quality of educational capital. In his view, the quantity of educational capital can be measured by the number of graduates. However, he emphasizes that it is rather difficult to measure the quality of education accurately. The author adds that the quality of education is reflected in the performance of students and graduates, as the value added of schooling can be measured by the labor market performance, such as extra earnings or employment, of educated workers. In the absence of official definitions separating quantity from quality in educational inputs and outputs, this section provides the basic rationale behind the choices made in this paper.

Before details of the selected inputs and outputs are provided, figure 1 gives a synthetic overview of educational inputs and outcomes, as defined in Scheerens, Luyten and van Ravens (2011).

Figure 1
A synthetic overview of educational inputs, processes and outcomes

The selection of quality and quantity educational input and output indicators was mostly dictated by data availability (at the system level). Additionally, some indicators considered as either inputs or outputs of the tertiary education system were highly correlated with other selected variables, so we had to drop them. The following subsections link the selected variables to the definitions of input, output and process indicators shown in figure 1. System-level process indicators were not considered at all owing to the lack of appropriate data.

3.1 Quantitative measures of educational inputs


General government expenditures on tertiary education as a percentage of GDP (financial resources indicator) are chosen as the most common measure of tertiary education public investments/expenditures. Due to the correlation of this measure with similar measures of inputs, other measures are excluded. Data for this measure are available for the entire analyzed period.

Financial aid to students as a percentage of total public expenditure on education at the tertiary level (financial resources indicator) is selected as an input since it indicates public expenditures directed straight at students. It is assumed to add new information regarding tertiary education financial inputs, since it is not correlated with the previous financial resources indicator. Data for this measure are available for the 2004-2012 period.

One limitation should be noted here. Namely, both financial resources indicators contain only public spending on tertiary education. The structure of financing sources could also affect efficiency, since publicly financed education resources (see system-level financial inputs and process indicators in figure 1) do not represent the total amount of educational spending. However, comparable data on private spending on education were not available for all countries in our sample.

The ratio of pupils and students to teachers and academic staff in tertiary education is selected as a human resources indicator for the last analyzed sub-period (2013-2015), a choice dictated by data availability.

3.2 Qualitative measure of educational inputs


The percentage of underachieving 15-year-old students in the PISA survey (average of all fields) is an output indicator of secondary education. We assume it is a contextual indicator that measures human capital input quality at the tertiary level of education, since it contains information about the quality of the student population before it enters the tertiary education system. Data for this measure are available for the entire analyzed period.


3.3 Quantitative measures of educational outputs


Tertiary education graduates (ISCED 5-6, per 1,000 of population aged 20-29) and graduates aged 20-34 (% of the corresponding population) are selected as outcome/attainment indicators that are the most important and commonly used measures of tertiary education outputs. The first indicator is available for the 2004-2012 period, while the latter was used for the analysis in the last sub-period (2013-2015). Since both measures indicate only the number of students who successfully exit the tertiary education system and do not contain any information regarding their “quality”, we regard them as quantitative indicators of educational outputs.

The population aged 15-64 with completed tertiary education is selected as a common quantitative output indicator, since it only counts the number of tertiary educated people and provides no information about the qualitative features of the tertiary educated population. It should be noted that the population with completed tertiary education also reflects past spending on education, while our analysis measures outputs at the same time as inputs. However, if we considered only past spending on tertiary education, we would still face a similar problem. Besides historical data availability problems, if we took (financial) inputs from previous periods, we would neglect the potential efficiency of current expenditures in "producing" a new tertiary educated population. This is because current financial resources devoted to tertiary education are spread across current students, and within the three-year periods (for which we take averages) some of those students become part of the tertiary educated population. Data for this measure are available for the entire analyzed period.

The ratio of the unemployment rate (%, age 15-64) for all educational levels to the unemployment rate (%, age 15-64) of the tertiary educated labor force is selected as an impact indicator of tertiary education outcomes; it measures the returns to tertiary education on the labor market. Due to its correlation with similar labor market outcome measures, other measures are excluded. Data for this measure are available for the entire analyzed period. Even though this indicator could be seen as a qualitative tertiary education outcome measure, we included it in both the efficiency and the effectiveness analysis. We argue that a high ratio of the unemployment rate for all educational levels to the unemployment rate of the tertiary educated labor force does not necessarily reflect high efficiency of tertiary education in terms of labor market outcomes, but could also be a result of low activity rates of the tertiary educated population. Therefore, we correct this measure with the activity rates of the tertiary educated population.

3.4 Qualitative measures of educational outputs


Following the preceding paragraph, the ratio of the unemployment rate (%, age 15-64) for all educational levels to the unemployment rate (%, age 15-64) of the tertiary educated labor force is multiplied by the activity rate of the tertiary educated population. The resulting measure is selected as a qualitative impact indicator of tertiary education outcomes. Data for this measure are available for the entire analyzed period.
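Written out, the corrected indicator for a country c is

$$Q_c = \frac{u_c^{\mathrm{all}}}{u_c^{\mathrm{tert}}} \times a_c^{\mathrm{tert}},$$

where $u_c^{\mathrm{all}}$ and $u_c^{\mathrm{tert}}$ are the unemployment rates (%, age 15-64) for all educational levels and for the tertiary educated labor force, and $a_c^{\mathrm{tert}}$ is the activity rate of the tertiary educated population. The shorthand $Q_c$ is ours; in the models below this variable is labelled (O)U/UT*ACT.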

An average overall score in the Times Higher Education university rankings is chosen as an output indicator of tertiary education quality in the last sub-period (2013-2015). We considered other ranking lists, but Times Higher Education was the only university rankings database that covered all countries in our sample in 2016. In the previous sub-periods (2004-2012), we used gross domestic product in PPS per capita (% of average) as a proxy for tertiary education output quality, due to the incompleteness of the university rankings data and its correlation with the university rankings (overall score). Anecdotal evidence presented in figure 2 justifies this choice. Namely, the correlation between GDP per capita and the average university overall score (a measure of educational outcome quality) in the Times Higher Education (2017) ranking significantly exceeds the correlation between GDP per capita and the tertiary educated population as a percentage of the population aged 15-64 (a typical measure of educational outcome quantity).
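The comparison underlying figure 2 amounts to two correlation coefficients. A minimal sketch in Python, assuming a hypothetical data file and column names (none of these identifiers come from the paper):

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("eu24_indicators.csv")
r_quality = df["gdp_pc_pps"].corr(df["the_overall_score"])    # quality proxy
r_quantity = df["gdp_pc_pps"].corr(df["tertiary_pop_15_64"])  # quantity measure
print(f"GDP pc vs. average university score:  {r_quality:.2f}")
print(f"GDP pc vs. tertiary population share: {r_quantity:.2f}")
```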

Figure 2
Quantity versus quality of education as GDP per capita correlates

The analysis is performed on a sample of 24 EU countries7 for which all the necessary data for the 2004-2015 period were available. The entire time span is divided into four three-year sub-periods for which comparable data and variables were available. Table 1 summarizes the selected inputs and outputs in the efficiency and effectiveness DEA models.

Figures 3, 4 and 5 show trends in educational (quantity and quality) inputs and outputs within the EU countries during the analyzed periods (averages for the subperiods 2004-2006, 2007-2009, 2010-2012, 2013-2015). The figures reveal substantial differences among EU member states regarding educational inputs and outputs. Nevertheless, a few conclusions can be drawn.

Table 1
Inputs, outputs and quality indicators

Inputs – The more developed EU countries generally have greater direct investment in students (in %) (figure 3a). Something similar holds for general government expenditure (figure 3b). However, there are a few exceptions, such as the UK on the low-expenditure side and Poland, Estonia and Lithuania on the high-expenditure side (figure 3c). The student-teacher ratio varies from 10.7 in Sweden to 22.5 in the Czech Republic.

Outputs – Graduation rates (figure 4a) increased in all countries within the period of analysis, and a few post-transition economies with relatively low incomes have relatively high graduation rates. Regarding labor market outcomes (figure 4c), the tertiary educated labor force seems to have a somewhat lower unemployment rate relative to the overall unemployment rate in less developed EU countries. This could be due to the relative scarcity of tertiary educated labor in lower-income countries, which provides it with a better labor market position (figure 4b).

Quality indicators of inputs and outputs – After correcting the labor market outcomes described above for the activity rates of the tertiary educated, some countries, like the Netherlands, Sweden, Germany, Ireland and Austria, improve their relative position, while the positions of Croatia, Slovakia and Romania deteriorate (figure 5a). The correlation between per capita GDP and the university ranking overall score has already been commented on. As we have already emphasized, both of these outputs measure the quality of tertiary education. Finally, figure 5c shows that the percentage of underachieving 15-year-old students (measured as the average of all fields in the PISA survey) is usually much larger in the poorest EU countries, while it is lowest in the wealthiest ones (with a few exceptions). This means that poorer countries receive students of "lower quality".

Figure 3
Tertiary education inputs (averages 2004-2006, 2007-2009, 2010-2012, 2013-2015)
Figure 4
Tertiary education outputs (averages 2004-2006, 2007-2009, 2010-2012, 2013-2015)
Figure 5
Tertiary education quality indicators (averages 2004-2006, 2007-2009, 2010-2012, 2013-2015)



4 Methodology


The efficiency and (what we later regard as) the effectiveness analysis of tertiary education in 24 EU member states8 is conducted using data envelopment analysis (DEA). DEA is a nonparametric method of mathematical programming developed for evaluating the relative efficiency of the units under assessment, usually called decision-making units (DMUs). Since its introduction with the pioneering CCR model in 1978 (Charnes, Cooper and Rhodes, 1978), followed by the BCC model published by Banker, Charnes and Cooper in 1984, DEA has been recognized as a modern tool for performance management. While the CCR model assumes constant returns to scale (CRS), the BCC model assumes variable returns to scale, which allows DEA to be used in problems where increases in inputs result in non-proportionate increases in outputs (and vice versa). The most appealing features of DEA are that it allows efficiency to be determined against multiple criteria, with appropriately selected variables that are (in most models) unit-invariant, and without pre-defined weights. In addition, all assessments are relative, given the finite number of comparable DMUs. Following the specific needs of the research environment, a vast number of DEA models have been developed to fit and capture the nature of the research problem, providing a great tool for different kinds of efficiency analysis. The popularity of DEA and the number of its applications are also on the rise (Emrouznejad and Yang, 2018).

DEA was initially developed with the idea of measuring the efficiency of production units, such as factories, hospitals or banks, whose inputs and outputs can be determined directly. Such DMUs can manage their inputs and outputs to a certain degree (hence the name decision-making units). An additional assumption is that DMUs aim to use their available inputs to achieve greater outputs, or try to use fewer inputs to produce the desired level of output. In other words, they are assumed to aim for efficiency in a production process. However, the application of DEA has spread beyond production processes, and researchers use it to evaluate the relative efficiency of different kinds of (relatively homogeneous) units given their undesirable (input) and desirable (output) characteristics. Examples include portfolio selection, the performance of companies based on their financial ratios, the performance of countries according to their macroeconomic indicators, or different "processes" such as fiscal or educational policy. Obviously, such DMUs are not "decision-making" units themselves, and not all of them should aim for efficiency in the sense of fewer inputs and greater outputs. Moreover, the selection of their inputs and outputs is arbitrary, but this allows a researcher to define the relevant aspects of the "efficiency" of the DMUs.

The use of DEA to estimate the relative efficiency of education at different levels (primary, secondary, tertiary) has been very popular in recent years. The overview of some of these studies in the literature overview revealed that the most frequently used model is the BCC model (with input or output orientation), which is an appropriate approach given the nature of this research problem. Without questioning the great contribution and effort of past research, we argue that its selection of inputs and outputs gives more importance to a greater quantity of educational output. We strongly suggest that education should be assessed not only in terms of quantity but also in terms of quality. Figuratively speaking, a factory that manages to produce something using almost nothing should be seen as a role model, and a factory that invests a lot relative to others and achieves less than the others should be recognized as poorly managed. However, countries that invest heavily in education should not be punished in such studies if they manage to provide education of high quality. Likewise, countries with almost negligible inputs should not be rewarded just because they manage "to produce" some amount of low-quality output despite their low inputs. Therefore, at the beginning of a study using DEA, the crucial question should be asked: "Are we really aiming at quantity or at quality?", and the answer should guide the selection of the inputs and the outputs relevant for the study.

In addition, just as the output of a production facility is determined by the quality of its inputs, which cannot always be controlled, certain levels of the educational process are determined by the outputs of the preceding processes. Figuratively, one cannot make a tasty cake using salt instead of sugar. For this problem, DEA allows the definition of non-discretionary inputs, which are relevant but not controllable, being defined by the environment (Banker and Morey, 1986). This approach was used in some previous DEA studies of education. However, as we explain in the following paragraphs, we treat the non-discretionary variables as discretionary in order to provide results that are more informative.

DEA models can be output-oriented, aiming at the maximization of outputs for a given level of inputs; input-oriented, aiming at the minimization of inputs for a given level of outputs; or non-oriented. The models can also assume constant, variable or generalized returns to scale. Given the nature of the problem we analyze, we use the output-oriented model assuming variable returns to scale (the BCC model). To explain the methodology, we first formulate the model. Let there be N homogeneous and comparable decision-making units (DMUs): DMU_1, DMU_2, ..., DMU_N. We assume that their efficiency is to be estimated in terms of a certain number of inputs, variables whose values we want to be as small as possible, and a certain number of outputs, variables whose values we prefer to be as large as possible. Let $x_{ij} \geq 0$ be the $i$-th input, $i \in \{1, \dots, m\}$, and $y_{rj} > 0$ the $r$-th output, $r \in \{1, \dots, s\}$, of DMU_j, $j \in \{1, \dots, N\}$. Each DMU_j is therefore represented by a vector of inputs $x_j = (x_{1j}, x_{2j}, \dots, x_{mj})$ and a vector of outputs $y_j = (y_{1j}, y_{2j}, \dots, y_{sj})$, so that $X = [x_{ij}] \in \mathbb{R}^{m \times N}$ is the input matrix and $Y = [y_{rj}] \in \mathbb{R}^{s \times N}$ is the output matrix. To make the model stable, it is recommended that the number of DMUs (N) be at least $\max\{m \cdot s,\, 3(m+s)\}$. The BCC model (Banker, Charnes and Cooper, 1984) can be written in the following envelopment form:

$$\max_{q,\, \lambda,\, s^{+},\, s^{-}} \; q + \varepsilon \left( \sum_{r=1}^{s} s_r^{+} + \sum_{i=1}^{m} s_i^{-} \right) \qquad (1)$$

subject to

$$\sum_{j=1}^{N} \lambda_j x_{ij} + s_i^{-} = x_{i0}, \quad i = 1, \dots, m, \qquad (2)$$

$$\sum_{j=1}^{N} \lambda_j y_{rj} - s_r^{+} = q\, y_{r0}, \quad r = 1, \dots, s, \qquad (3)$$

$$\sum_{j=1}^{N} \lambda_j = 1, \qquad (4)$$

$$\lambda_j,\; s_r^{+},\; s_i^{-} \geq 0 \;\; \forall i, r, j; \qquad q \text{ free in sign},$$

where $\varepsilon > 0$ is a small non-Archimedean constant and $s_r^{+}$, $s_i^{-}$ are slack variables. If we denote the optimal solution by $(q_0^{*}, \lambda_0^{*}, s_0^{+*}, s_0^{-*})$, DMU_0 is efficient if and only if the efficiency score $q_0^{*} = 1$ and all slacks are zero, $s_0^{+*} = s_0^{-*} = 0$. DMU_0 is weakly efficient if and only if $q_0^{*} = 1$ but $s_r^{+*} \neq 0$ or $s_i^{-*} \neq 0$ for some $i$ and $r$ in some alternate optima (Cooper, Seiford and Zhu, 2011). Otherwise, a DMU is inefficient. From the optimal solution of program (1)-(4), an inefficient DMU $(x_0, y_0)$ can be projected onto the BCC efficiency frontier as a combination of other DMUs using the formulas $\hat{x}_0 = X\lambda^{*} = x_0 - s^{-*}$ and $\hat{y}_0 = Y\lambda^{*} = q^{*} y_0 + s^{+*}$ (Cooper, Seiford and Zhu, 2011). The lambdas therefore allow us to identify the peer group of an inefficient DMU. By observing these efficient projections, we can analyze how a DMU should increase its outputs and/or decrease its inputs to become relatively efficient.9
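To make the computation concrete, the following is a minimal sketch of the radial (first) phase of program (1)-(4) using SciPy's linear programming routine. The slack-maximizing second phase is omitted, and the function name and matrix layout are our own illustrative choices, not code used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(X, Y, j0):
    """Radial phase of the output-oriented BCC model (1)-(4) for unit j0.

    X : (m, N) array of inputs, Y : (s, N) array of outputs.
    Returns (q, lam): q >= 1, and the unit is radially efficient when
    q == 1; positive entries of lam identify its peer group.
    """
    m, N = X.shape
    s, _ = Y.shape
    # Decision vector: z = [q, lambda_1, ..., lambda_N]
    c = np.zeros(N + 1)
    c[0] = -1.0  # maximize q  ==  minimize -q
    # Inputs (2): sum_j lambda_j x_ij <= x_i0 (input slacks dropped)
    A_in = np.hstack([np.zeros((m, 1)), X])
    b_in = X[:, j0]
    # Outputs (3): q*y_r0 - sum_j lambda_j y_rj <= 0 (output slacks dropped)
    A_out = np.hstack([Y[:, [j0]], -Y])
    b_out = np.zeros(s)
    # VRS convexity (4): sum_j lambda_j = 1
    A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, N))])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (N + 1),
                  method="highs")
    return res.x[0], res.x[1:]

# Example: radial scores for all units (columns of X and Y):
# scores = [bcc_output_efficiency(X, Y, j)[0] for j in range(X.shape[1])]
```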

The period of analysis is divided into four subperiods: 2004-2006, 2007-2009, 2010-2012 and 2013-2015. The selection of the periods is mostly dictated by the availability of the data and the change in the data methodology. As explained in table 1, subperiods within 2004-2012 and subperiod 2013-2015 are characterized by different variables due to the availability of the data. Therefore, a direct comparison of results between periods is not advisable.

To circumvent the problem of missing data, we calculate simple three-year averages as the closest representative of each period. Even after this procedure some countries still had missing data, so our approach was to exclude countries with more than one missing data item. In order to keep as many countries as possible in the sample, countries with only one missing data item were retained, and the missing input/output was assigned a pessimistic value large/small enough for it not to enter the objective function, as proposed by Kuosmanen (2009). We did this only for countries with one missing data item because we did not want to affect the "technology set" and worsen the relative ranking of other DMUs with complete data. Additionally, we checked that the optimal solution assigned a multiplier of 0 to input/output variables with an arbitrarily set value.
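A minimal sketch of this pessimistic imputation, assuming the inputs and outputs are held in NumPy arrays with NaN marking the (single) missing item per country; the constants are arbitrary illustrative magnitudes, not values from the paper:

```python
import numpy as np

def fill_missing_pessimistic(X, Y, big=1e6, small=1e-9):
    """Pessimistic imputation in the spirit of Kuosmanen (2009); a sketch,
    not his exact procedure. A missing input gets a prohibitively large
    value and a missing output a negligibly small one, so the affected
    variable cannot flatter the country's score. One should then verify
    that the optimal solution puts a zero multiplier on the imputed entry.
    """
    X = np.where(np.isnan(X), big, X)    # missing input  -> "very costly"
    Y = np.where(np.isnan(Y), small, Y)  # missing output -> "worthless"
    return X, Y
```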

After the correction of the sample, the analysis includes 24 EU countries: Belgium, Bulgaria, the Czech Republic, Denmark, Germany, Estonia, Ireland, Spain, France, Croatia, Italy, Latvia, Lithuania, Hungary, the Netherlands, Austria, Poland, Portugal, Romania, Slovenia, Slovakia, Finland, Sweden, and the United Kingdom.

In the first step, we run the quantity-based models using the expenditure variable (I)EX2 and financial aid (I)FA(%EX) as inputs, and as outputs the percentage of graduates (O)GRAD(20-29), the education returns on the labor market (O)U/UT and the percentage of the highly educated population (O)POPT, for the period 2004-2012. We performed a similar analysis for the period 2013-2015, except that the variable (I)FA(%EX) is replaced by the student-teacher ratio (I)(S/T) and the variable (O)GRAD(20-29) by (O)GRAD(20-34). Obviously, such a selection of variables rewards the quantity of educational output and reports on the efficiency of tertiary education.

The second step was to introduce quality corrections into the previously obtained efficiency analysis. First, we account for output quality, and then we introduce the input-quality correction as well. For the output-quality control, we replace the output variable (O)U/UT by the quality-corrected variable (O)U/UT*ACT ((O)U/UT multiplied by the activity rates of the tertiary educated population). Also, the variable (O)POPT was replaced by (O)GDPpc in 2004-2012 and by the university ranking (O)UR in 2013-2015 (as (O)GDPpc and (O)UR proved to be highly positively correlated). Afterwards, the input-quality control was introduced by including PISA results in the analysis. Altogether, we estimated six different models using the inputs and outputs in the sub-periods presented in table 2; a schematic summary is sketched after the table.

Table 2
Variables used in each DEA model, by period
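To make the six specifications concrete, the sketch below organizes the variable sets of table 2 as a simple configuration mapping; the short codes mirror the labels used in the text, but the exact spellings and the structure are our own:

```python
# Each (model, subperiod) pair maps to its input and output sets.
MODELS = {
    ("quantity", "2004-2012"): {
        "inputs":  ["(I)EX2", "(I)FA(%EX)"],
        "outputs": ["(O)GRAD(20-29)", "(O)U/UT", "(O)POPT"],
    },
    ("output-quality", "2004-2012"): {
        "inputs":  ["(I)EX2", "(I)FA(%EX)"],
        "outputs": ["(O)GRAD(20-29)", "(O)U/UT*ACT", "(O)GDPpc"],
    },
    ("input-output-quality", "2004-2012"): {
        "inputs":  ["(I)EX2", "(I)FA(%EX)", "(I)PISA"],
        "outputs": ["(O)GRAD(20-29)", "(O)U/UT*ACT", "(O)GDPpc"],
    },
    # 2013-2015 variants: replace (I)FA(%EX) with (I)(S/T),
    # (O)GRAD(20-29) with (O)GRAD(20-34), and (O)GDPpc with (O)UR.
}
```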



5 Results: analysis of the efficiency and effectiveness of tertiary education in the EU


Figures 6a-6c present our results for the period 2004-2012, while figure 6d shows the results for the last period, 2013-2015, which is analyzed using different variables. Therefore, we do not make direct comparisons between them. However, the results for 2013-2015 mostly support our conclusions; the choice of variables for this period appears rather robust, and findings similar to those for 2007-2012 can be drawn.

The exact DEA scores for the analyzed period are given in table A5 in the appendix; here we present the rankings resulting from these scores. The dark bars in figures 6a-6d indicate the rankings of the countries calculated by the quantity model. For the sake of clarity, we present a higher ranking with a higher bar. In addition, we rank all efficient units as 24th and the unit with the highest inefficient score as 23rd (i.e. second best), etc. Observing the results in general, we see that an approximately similar number of countries (9 to 14) remains efficient throughout the years within each model. The relatively large number of efficient countries within each period is the result of the total number of input and output variables: decreasing the number of inputs and outputs would decrease the number of efficient countries. However, we aimed to include most of the variables used in previous studies, and this comes at a cost. The quantity-based efficiency results show that some of the most developed countries in the sample, like Austria and the Netherlands, are not efficient, while some less developed countries, like Hungary, Estonia and Bulgaria, define the efficient frontier in some periods. The change of ranking reported by the output-quality model is shown with a striped bar. When output-quality control is included, most of the efficient countries retain their position, but a significant number of them fall in rank while the rank of some others rises. Overall, both the number of efficient countries and the overall average efficiency score decrease.
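The ranking convention just described is easy to make explicit; a sketch assuming the DEA scores lie in (0, 1] with 1 denoting efficiency (the function and variable names are ours):

```python
import pandas as pd

def figure6_ranks(scores: pd.Series, tol: float = 1e-9) -> pd.Series:
    """Ranking used in figures 6a-6d: every efficient country
    (score == 1) is ranked n-th (24th here, the highest bar), the
    inefficient country with the highest score is ranked n-1, etc."""
    n = len(scores)
    efficient = scores >= 1.0 - tol
    ranks = pd.Series(float(n), index=scores.index)  # efficient: rank n
    best_first = scores[~efficient].sort_values(ascending=False).index
    for k, country in enumerate(best_first):
        ranks[country] = n - 1 - k
    return ranks
```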

Afterwards, we take account of the quality of the inputs. In the input-output quality model, we add PISA as an input. In this way, if the share of underachieving students in PISA is relatively low, it will increase the efficiency score; if it is high, it will decrease it. In figures 6a-6d, we use a dark black bar to indicate the difference between the rank in the output-quality model and that in the input-output quality model. If the difference is positive, the country's tertiary education produces relatively high-quality outputs given the relatively low quality of its students (inputs) as measured by PISA results. If the difference is negative, the opposite is true. In this way, we gain insight into how the quality of students, as measured by PISA, can influence educational efficiency.

When educational output quality is considered in our model, it becomes obvious that countries which were inefficient in the quantity-based model, and which are usually perceived as countries with solid educational systems, improve their rank significantly. Namely, the output-quality based efficiency results in almost all analyzed periods (figures 6a-6d) show that Austria and the Netherlands reach the efficient frontier. Austria and the Netherlands are the most obvious examples, but the same is true of Germany (2007-2009), Denmark (2007-2009, 2010-2012), Sweden (2007-2009, 2010-2012) and Belgium (2013-2015), which also experience efficiency gains in the output-quality model. On the other hand, less developed countries (like Hungary, Estonia and Bulgaria) lose their efficiency in all periods in the quality-based model in comparison with the quantity model.

The correction for input quality generally shows that, at a given level of PISA results, the tertiary education efficiency ranking of many countries should actually be raised. This is noticeable for Austria, Italy, France and the Netherlands among the developed countries, and for Bulgaria (all periods) and Croatia and Hungary (a slight increase in all periods except 2007-2009) among the less developed countries.

For example, during the period 2007-2012, Croatia's relative position slightly deteriorates when the output-quality control is introduced. However, when the relatively poor quality of students in Croatia is taken into account, tertiary education effectiveness is greater than the output-quality model results imply. Generally, Croatia has one of the lowest values of (O)U/UT*ACT and (O)GDPpc but, according to our results, it is not the worst-ranked country in the EU concerning tertiary education efficiency and effectiveness. Observing the reference set of efficient countries for Croatia (identified by λ* > 0 in model (1)-(4); results shown in tables A2-A4 in the appendix) for the period 2004-2012, the BCC model projects Croatia using the input/output vectors of the efficient Czech Republic (among others). For comparison, the Czech Republic has lower inputs in expenditures and PISA, and all its outputs are greater than Croatia's.

Poland and Estonia are less developed countries that could achieve greater tertiary education effectiveness given their relatively high-quality students. The same can be concluded for Finland, a developed country that uses its high-quality students ineffectively.

Figure 6
Results of the DEA analysis

The question is what a country could do to be relatively better in the area of educational quality in the future and which countries should be its closest quality-led efficient role models. The optimal solution of the BCC model provides the values of the slack variables for inefficient countries. The slacks indicate the shortfalls in the data of a certain country and suggest possible future improvements in the quality dimension. However, such findings are country-specific and their analysis is beyond the scope of this paper. Interested readers can find the results in the appendix (figure A1), where the figures indicate the greatest shortfalls as a percentage of the original data for each country.

We chose not to analyze the scale of the suggested corrections for each country within each model, but we offer some general observations on the individual results: (1) the periods 2007-2009 and 2010-2012 show rather similar patterns, with output-quality corrections noticeable for Bulgaria, Estonia and Denmark; (2) in the period 2007-2015, Austria and the Netherlands improve their rating after both the output and the input-output quality corrections; (3) Poland, and especially Finland and Estonia, are the only countries able to utilize their high-quality students (as measured by PISA results) more effectively (in terms of the quality of educational outputs/outcomes). Finally, the overall best-ranked countries after both input and output quality control for the whole period 2004-2015 are the UK, Slovakia, Italy, France, Lithuania, Ireland and Finland.



6 Policy implications and future research recommendations


This paper has dealt with tertiary education efficiency and effectiveness in the EU. It is a well-established fact that the quality of education matters more than the quantity. Still, most papers using the DEA approach compare tertiary education between countries considering only the definition of efficiency. Some papers deal with quality issues, but mostly on the output side of the educational "production function". Therefore, questions regarding the quality of educational inputs and outputs and their effectiveness are usually covered only partially. In this paper, we have argued that a focus on efficiency alone can give misleading results, which could translate into flawed educational policy prescriptions.

We performed DEA on the available educational inputs and outputs during four non-overlapping periods from 2004 to 2015 in 24 EU countries. DEA allowed us to rank countries by their tertiary education efficiency/effectiveness in achieving favorable educational and labor market outcomes. However, we argued that DEA results should be interpreted with a great deal of caution and should not serve as important educational policy and strategy inputs when they neglect the quality of educational inputs and outputs, as well as the decreasing returns on higher education in countries where tertiary education covers a broad share of the population. To avoid a potential bias towards low-input units within DEA, educational inputs and outputs were adjusted using quality-of-education indicators. Specifically, we differentiated between quantity and quality measures of educational inputs and outputs, which enabled us to distinguish tertiary education efficiency from tertiary education effectiveness, since the latter seems to matter more for growth.

Our results show that many less developed EU countries achieve efficiency but not effectiveness in tertiary education. The opposite is true of some developed countries. This is possible because of the low (high) educational inputs in less (more) developed countries. When we consider some quality indicators of outcomes/outputs, a few less developed EU countries that were characterized as efficient in the quantity model fail to reach the defined efficiency frontier. On the other hand, some of the inefficient developed countries improve their DEA-based ranking and achieve effectiveness (quality-based efficiency). It is not only the quality of educational outputs that matters for the results; the same is true of input quality considerations. It turns out that some countries that were downgraded (upgraded) in the output-quality DEA model have a lower (higher) quality student base as measured by PISA results. Since tertiary education cannot be expected to provide the same quality of outcomes with different input qualities, efficiency improves (deteriorates) in the input-output quality-based model in many countries with a low (high) quality student base. The results therefore confirm our hypothesis that quality considerations can significantly affect the results of standard tertiary education efficiency analysis. Future research in this area should not evaluate tertiary education efficiency only in terms of quantity measures of educational inputs and outputs. As already emphasized, the literature on economic growth and convergence long ago acknowledged that educational quality is more important than quantity. DEA-based efficiency/effectiveness research should follow this example.

Future research should dig deeper into the rich set of models and results that DEA provides. Questions like "what induces inefficiency in inefficient countries?" (see figure A1 in the appendix) and "which countries define the reference sets (role models) for inefficient countries?" (see tables A2-A4 in the appendix) are especially important for countries like Croatia, which proved to be inefficient and ineffective regarding tertiary education. Research into the first question should illuminate potential financial black holes, while answers to the second could shed some light on good practices that could be (easily) implemented in Croatian education and customized to its needs. From the methodological point of view, future research should address the issue of large numbers of variables, which result in too many efficient decision-making units (countries), as well as some timing and variable selection issues.

The key policy implication of our results is that greater emphasis should be placed on the convergence of tertiary education effectiveness (not efficiency) within the EU, to enhance the transmission of tertiary education outcomes into higher productivity and growth rates. However, since primary and secondary education define the "quality" of inputs at higher educational levels, such a policy task requires comprehensive educational reform in the countries that are lagging behind. Nevertheless, it should be emphasized that the major limitations of this study follow from the limited data resources and some concerns about the quality of the data reported by Eurostat. The inclusion of data that do not properly represent the situation might significantly change the relative results of the analysis.



Appendix


Table A1
Previous research rankings (DEA)

Figure A1
Input and output slacks of inefficient countries

Table A2
Reference set of a DMU from the quantity-based DEA model, by period

Table A3
Reference set of a DMU from the output quality-based DEA model, by period

Table A4
Reference set of a DMU from the input and output quality - based DEA model, by period

Table A5
Efficiency scores obtained from all models, by period



Notes


* The authors would like to thank two anonymous referees for helpful comments on the paper. 

The article was submitted for the 2018 annual award of the Prof. Dr. Marijan Hanžeković Prize.


1 Total enrolment in tertiary education (ISCED 5 to 8), regardless of age, expressed as a percentage of the total population of the five-year age group following on from secondary school leaving.

2 We will not go into detail regarding the broader use of DEA in public sector efficiency evaluation. Interested readers can refer to the following research in this area: Clements (2002), Afonso and St. Aubyn (2006), Aristovnik (2013a; 2013b), Aristovnik and Obadić (2014), etc.

3 See Table A1 in Appendix.

4 For detailed explanation of variables see Aubyn et al. (2009).

5 Table A1 in the appendix provides the previous research results.

6 See Table A1 in Appendix.

7 Due to data shortages, Cyprus, Greece, Malta and Luxembourg were excluded from the dataset.

8 We excluded Cyprus, Malta, Luxembourg and Greece from the analysis due to the lack of data.

9 Some additional explanation on the BCC and other DEA models can be found in, for example, Cooper, Seiford and Tone (2006) or Cooper, Seiford and Zhu (2011).


Disclosure statement


No potential conflict of interest was reported by the authors.

References


  1. Afonso, A. and St. Aubyn, M., 2006. Cross-country efficiency of secondary education provision: A semi-parametric analysis with non-discretionary inputs. Economic Modelling, 23(3), pp. 476–491 [CrossRef]

  2. Ahec Šonje, A., Deskar-Škrbić, M. and Šonje, V., 2018. Efficiency of public expenditure on education: comparing Croatia with other NMS. MPRA Paper, No. 85152.

  3. Altinok, N., Diebolt, C. and Demeulemeester, J.-L., 2014. A new international database on education quality: 1965–2010. Applied Economics, 46(11), pp. 1212–1247 [CrossRef]

  4. Aristovnik, A., 2013a. Technical Efficiency of Education Sector in the EU and OECD Countries: The Case of Tertiary Education. Conference Proceedings 16th Toulon-Verona Conference "Excellence in Services", Ljubljana, 29 – 30 August 2013, pp. 43–51.

  5. Aristovnik, A., 2013b. Relative efficiency of education expenditures in Eastern Europe: a non-parametric approach. Journal of Knowledge Management, Economics and Information Technology, 3(3), pp. 1–4.

  6. Aristovnik, A. and Obadić, A., 2011. The funding and efficiency of higher education in Croatia and Slovenia: a non-parametric comparison. Amfiteatru Economic, 13(30), pp. 362–376.

  7. Aristovnik, A. and Obadić, A., 2014. Measuring relative efficiency of secondary education in selected EU and OECD countries: the case of Slovenia and Croatia. Technological and Economic Development of Economy, 20(3), pp. 419–433 [CrossRef]

  8. Aubyn, M. S. [et al.], 2009. Study on the efficiency and effectiveness of public spending on tertiary education. European Economy, Economic Papers, No. 390.

  9. Banker, R. D., Charnes, A. and Cooper, W. W., 1984. Some Models for Estimating Technical and Scale Inefficiencies in Data Envelopment Analysis. Management Science, 30(9), pp. 1078–1092.

  10. Banker, R. D. and Morey, R. C., 1986. Efficiency Analysis for Exogenously Fixed Inputs and Outputs. Operations Research, 34(4), pp. 513–521 [CrossRef]

  11. Barro, R., 2001. Human Capital and Growth. The American Economic Review, 91(2), pp. 12–17.

  12. Barro, R., 2013. Education and Economic Growth. Annals of Economics and Finance, 14(2), pp. 301–328.

  13. Bourguignon, F., Elkana, Y. and Pleskovic, B., 2007. Capacity Building in Economics Education and Research. Washington, DC: World Bank.

  14. Charnes, A., Cooper, W. W. and Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6), pp. 429–444 [CrossRef]

  15. Clements, B. J., 2002. How efficient is education spending in Europe? European review of economics and finance, 1(1), pp. 3–26.

  16. Cooper, W. W., Seiford, L. M. and Zhu, J. (eds.), 2011. Handbook on Data Envelopment Analysis. Cham: Springer Science and Business Media [CrossRef]

  17. Cooper, W. W., Seiford, L. M. and Tone, K., 2006. Introduction to Data Envelopment Analysis and Its Uses. New York: Springer [CrossRef]

  18. Emrouznejad, A. and Yang, G., 2018. A survey and analysis of the first 40 years of scholarly literature in DEA: 1978–2016. Socio-Economic Planning Sciences, 61, pp. 4–8 [CrossRef]

  19. Eurostat, 2018d. General government expenditure by function (COFOG). Luxembourg: Eurostat.

  20. Eurostat, 2018f. Tertiary education graduates. Luxembourg: Eurostat.

  21. Fortunato, P. and Panizza, U., 2015. Democracy, education and the quality of government. Journal of Economic Growth, 20(4), pp. 333–363 [CrossRef]

  22. Hanushek, E. A. and Kimko, D. D., 2000. Schooling, Labor-Force Quality, and the Growth of Nations. American Economic Review, 90(5), pp. 1184–1208 [CrossRef]

  23. Kenny, J., 2008. Efficiency and Effectiveness in Higher Education: Who Is Accountable for What? Australian Universities' Review, 50(1), pp. 11–19.

  24. Kuosmanen, T., 2009. Data Envelopment Analysis with Missing Data. The Journal of the Operational Research Society, 60(12), pp. 1767–1774 [CrossRef]

  25. Pritchett, L., 2001. Where Has All the Education Gone? The World Bank Economic Review, 15(3), pp. 367–391.

  26. Pritchett, L., 2013. The Rebirth of Education: Schooling Ain’t Learning. Center for Global Development. Washington: Center for Global Development.

  27. Scheerens, J., Luyten, H. and van Ravens, J., 2011. Measuring Educational Quality by Means of Indicators in: J. Scheerens, H. Luyten and J. van Ravens, eds. Perspectives on Educational Quality. Dordrecht: Springer Netherlands, pp. 35–50 [CrossRef]

  28. Szirmai, A., 2015. Socio-Economic Development. Cambridge: Cambridge University Press.

  29. Times Higher Education, 2017. World University Rankings.

  30. Toth, R., 2009. Using DEA to Evaluate Efficiency of Higher Education. APSTRACT: Applied Studies in Agribusiness and Commerce, pp. 79–82.

  31. World Bank, 2018. Education Statistics-Gross enrolment ratio in tertiary education. Washington: The World Bank.

  32. Wößmann, L., 2006. Efficiency and Equity of European Education and Training Policies. CESifo Working Paper, No. 1779.

  33. Yotova, L. and Stefanova, K., 2017. Efficiency of Tertiary Education Expenditure in CEE Countries: Data Envelopment Analysis. Economic Alternatives, (3), pp. 352–364.