
Cuadernos de Desarrollo Rural

Print version ISSN 0122-1450

Cuad. Desarro. Rural vol.10 no.spe70 Bogotá Jan. 2013

 

The Worldwide Expansion of Evaluation: a World of Possibilities for Rural Development

La expansión de la evaluación a nivel internacional: un mundo de posibilidades para el desarrollo rural

L'expansion mondiale de l'évaluation: un monde de possibilités pour le développement rural

Pablo Vidueira*
José M. Díaz-Puente**
Ana Afonso***

*MSc. Researcher at Departamento de Proyectos y Planificación Rural, Universidad Politécnica de Madrid. E-mail: pablo.vidueira@upm.es
**PhD. Assistant Professor at Departamento de Proyectos y Planificación Rural, Universidad Politécnica de Madrid. E-mail: jm.diazpuente@upm.es
***PhD. Director of Publications Department, Romanian National Rural Development Network (NRDN) - Support Unit (NSU). E-mail: aafonso@rndr.ro

Recibido - Submitted - Reçu: 2012-05-25 • Aceptado - Accepted - Accepté: 2012-05-26 • Evaluado - Evaluated - Évalué: 2012-08-08 • Publicado - Published - Publié: 2013-03-30


Cómo citar este artículo

Vidueira, P., Díaz-Puente, J. M., & Afonso, A. (2013). The Worldwide Expansion of Evaluation: a World of Possibilities for Rural Development. Cuadernos de Desarrollo Rural, 10 (70), 159-180.


Abstract

The growing importance of the evaluation field is exemplified by its increasing use in international organizations and in public policies at the regional, national and local levels, as well as by the exponential growth of evaluation societies around the world. Focusing on rural development, this expansion is due to the utility of evaluation in providing evidence that aids decision-making and improves interventions, as well as in offering a value system that establishes explicit criteria and standards by which to judge these interventions. This paper aims to provide clarity on these aspects through a comprehensive analysis of the evolution and current situation of the evaluation field. The literature reviewed and our own evaluation experience confirm evaluation's ability to provide a scientific basis for decision-making in rural development programmes.

Keywords author: Evaluation networks and associations, rural development interventions, improvement, decision-making, values.

Keywords plus: Public policy, rural development policy, program development, decision-making.


Resumen

La importancia de la evaluación se concreta en su creciente uso en las organizaciones internacionales y en las políticas públicas regionales, nacionales y locales, así como en el crecimiento exponencial de asociaciones de evaluación en todo el mundo. Centrándose en el desarrollo rural, esta expansión se debe a la utilidad de la evaluación al proveer evidencias que ayudan a la toma de decisiones y mejoran las intervenciones, así como un sistema de valores que proporciona criterios y estándares explícitos con los que juzgar las intervenciones. Este artículo trata de dar relevancia a estos aspectos a través de un análisis profundo de la evolución y la situación actual del campo de la evaluación. La revisión bibliográfica y nuestra experiencia confirman las posibilidades que ofrece la evaluación al proporcionar una base científica para la toma de decisiones en los programas de desarrollo rural.

Palabras clave autor: Redes y asociaciones de evaluación, intervenciones en desarrollo rural, mejora, toma de decisiones, valores.

Palabras clave descriptores: Políticas públicas, proyectos de desarrollo rural, programas de desarrollo, toma de decisiones.


Résumé

L'importance croissante de l'évaluation est illustrée par son utilisation de plus en plus fréquente dans les organisations internationales et dans les politiques régionales, nationales et locales, ainsi que par le développement exponentiel des sociétés d'évaluation dans le monde. Mettant l'accent sur le développement rural, cette expansion est due à l'utilité de l'évaluation, qui fournit des preuves facilitant la prise de décision et améliorant ainsi les interventions. De même, elle fournit un système de valorisation qui inclut des critères et des standards explicites pour évaluer ces interventions. Cet article tente de donner une pertinence à ces aspects à travers une analyse complète de l'évolution et de la situation actuelle du domaine de l'évaluation. La révision bibliographique et notre expérience confirment les possibilités de l'évaluation pour fournir une base scientifique à la prise de décision dans les programmes de développement rural.

Mots-clés auteur: Réseaux et associations d'évaluation, interventions de développement rural, amélioration, prise de décision, valeurs.

Mots-clés descripteur: Politiques publiques, projets de développement rural, développement de programmes, prise de décision.


Introduction

Evaluation, as a professional practice, is defined as "the process of determining the merit, value and importance of things" (Scriven, 2005b). It involves individuals making judgments, but these should be backed up by real and objective facts.

The professional practice of evaluation has become more prominent in the last forty years (Díaz-Puente et al., 2007), giving rise to a number of fields. Some of these already play an important part in the development and practical application of evaluation methods (Scriven, 2005a). They involve the evaluation of products, such as consumer goods; of performance, such as examining students; of proposals, in order to select the best option; of personnel, in order to select the best candidates for certain roles; and finally, the areas most closely related to the evaluation of development: the evaluation of policies, plans, programmes and projects. This paper focuses on the evaluation of these interventions, based on the assumption that their objective is the improvement and development of territories and their populations.

There are different dimensions of evaluation and factors that can be evaluated (Scriven, 2005b). The evaluation of development interventions does not always require all of these, and the most useful ones have to be chosen on a case-by-case basis. In some cases, evaluation can focus on the results of the intervention through the analysis of different effects: positive and negative, direct and indirect, and short, medium and long term. In other cases, the evaluation activities can focus on the study of development processes, analyzing the application and management of the intervention; this tends to be more useful than the analysis of results alone when it comes to proposing improvements (Vela, 2003). The evaluation of costs is strongly linked to this dimension: monetary and non-monetary costs, direct and indirect costs, current costs and opportunity costs can all be analyzed. Another important dimension is the evaluation of the intervention logic, which focuses on analyzing the planning carried out: its need, pertinence and consistency with the reality of the territory. Two other dimensions are of great interest with regard to the development field: comparative evaluation and generalization. The first compares the evaluated intervention to other interventions from which similar benefits are expected on the basis of similar resource levels. The latter seeks to analyze to what extent the evaluated intervention (or some of its components) can be generalized to other conditions with similar results. The generalizations can concern other situations (physical, political, etc.), other personnel, other territorial or temporal scales, other benefits, etc. Carrying out this type of evaluation requires predicting the results of an intervention in different scenarios. Despite the risks involved in such predictions, this kind of evaluation can often contribute most powerfully to the improvement of development interventions.
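
The comparative and cost dimensions lend themselves to a worked illustration; the following is a minimal schematic sketch of ours, not a formula taken from the paper or from the literature cited here. Two interventions $A$ and $B$ deployed with similar resource levels can be contrasted through a simple cost-effectiveness ratio:

$$R_i = \frac{E_i}{C_i}, \qquad i \in \{A, B\},$$

where $E_i$ is the measured effect of intervention $i$ (for instance, jobs created or households served) and $C_i$ aggregates its monetary and imputed non-monetary costs. Other things being equal, the intervention with the higher $R_i$ delivers more benefit per unit of resources, although such a ratio must always be read alongside the indirect, long-term and distributional effects discussed above.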

This paper focuses on two key aspects. First, the evolution of evaluation throughout its history and its expansion across the world through professional associations, which gives an idea of the current importance of this activity. Second, the main contributions of evaluation within the field of development interventions: providing evidence for decision-making and improving performance, establishing a value system with explicit criteria and standards to judge the interventions, and providing methodologies and tools for analyzing, interpreting, comparing and generalizing the results, as well as learning from them. Nowadays, evaluation certainly provides important support for policies, programmes and projects that enable sustainable and endogenous development processes in rural areas.

1. The evolution of evaluation

Nowadays, evaluation has evolved into a range of approaches and experiences across the world. It is worth understanding this vastness from the beginning, in order to distinguish the most useful components with regards to rural development evaluation.

1.1. The beginning of evaluation: The loss of the initial approach

Human beings have tried to resolve problems using reasoning and tests for centuries. However, evaluation as a professional discipline was born in the second half of the 1960s, to help improve the quality of interventions, in the context of the substantial investments in social programmes by the American Government. These investments did not manage to put a stop to the complex problems they aimed to resolve (W.K. Kellogg Foundation, 1998), and as a result, there was growing pressure to show the benefits of the interventions in order to justify the allocation of resources (Stone, 1985; Walters, 1996; Wye & Sonnichsen, 1992). There was a need for a tool that enabled the effective prioritization of investments when making funding decisions (Patton, 1997).

This pressure, which started in the United States, spread to other countries, where evaluation became a tool for justifying decisions. As a result, the initial objective of evaluation, improving the programmes, gave way to demonstrating that they worked. In addition to the aforementioned pressure, the desire to apply the scientific method to the majority of evaluations also contributed significantly to this situation (W.K. Kellogg Foundation, 1998). This method is suitable for analyzing efficiency; however, it is very limited when it comes to evaluating development interventions, since these are strongly related to intangible capital.

As a result of both these factors, the historic increase in pressure to demonstrate the efficiency of public policies and the prevalence of a model based on measuring change, many evaluation projects failed to address issues as important as the process, the implementation and the improvement of the programmes. Therefore, it was necessary to return to evaluation's primary objective of improving interventions.

1.2. The expansion of evaluation in public policies

Since the 1960s, the United States has been characterized by constant methodological innovation and by an increasing institutionalization of the evaluation of public policies and programmes. One of the factors that triggered this expansion was the successful incorporation of evaluation activities in the United States Department of Defense, through a programme for evaluating the efficiency of alternative programmes. This success led to implementation across all the federal government agencies. The clear institutionalization of evaluation in the United States came with the creation of evaluation units within federal offices and the enactment of laws requiring the General Accounting Office (GAO) to analyze the efficiency of public programmes. In the field of methodological innovation and development, an evaluation institute was created within the GAO in 1980, known as the Program Evaluation and Methodology Division.

As a result, during the last three decades, the evaluation of programmes in areas as important as education or health has been a clear point of reference in public debates in the United States, generating valuable information for both the detractors and the supporters of public intervention. A large part of this experience was expressed in a series of evaluation models designed by North American authors concerned with finding methodological designs that go beyond evaluation based simply on the achievement of planned objectives. This provided a value base for making and justifying decisions, which could also enable improvements in public actions (Stufflebeam & Shinkfield, 1985).

The expansion of evaluation across other developed countries arose, as in the United States, from the introduction of budgetary reforms and the development of welfare and social cohesion policies. In Europe, this expansion was also driven by the development of the Community administration (Ballart, 1992; Román, 1999). In developing countries, the expansion often arose from participation in programmes financed by international organizations such as the UN, the World Bank and FAO.

In 1990, Hans-Ulrich Derlien described the expansion and development of public policy evaluation and identified a series of factors characterizing a group of pioneering countries that adopted evaluation in the 1970s (United States, Canada, Sweden, Germany and the United Kingdom). These countries formed what he called the first expansion. He also identified a second group of European countries that, in the 1980s, formed part of the second expansion: Denmark, Holland, Norway and Switzerland. This is shown in figure 1.

According to Patton (1999) "Evaluation is a culture1" that is shared by evaluators and all those in contact with their work. In the last 10 years, there has been an unprecedented expansion of this evaluation culture in the majority of countries, generating huge opportunities and challenges (Furubo et al., 2002; Díaz-Puente et al., 2007; IOCE, 2012a).

In Europe, evaluation has extended to those countries in central and southern Europe that previously lacked an evaluation culture. In the rest of the world, it has also extended to Africa, Latin America and Asia. In order to satisfy this increased demand, the number of consultancies, bodies and universities dedicated to evaluation began to grow (Love & Russon, 2000), and continues to do so.

The emergence of evaluation as a professional practice has been particularly important in Europe, where it has been institutionalized since 1988 in the application and management of public policies. European legislation introduced the requirement to evaluate programmes co-financed by the structural funds; as a result, a large number of evaluations have taken place in all the member states across a wide range of activities. The reasons for this expansion can be found in the great usefulness of evaluation and in its key role in planning public policies.

The institutionalization of evaluation requires a broader and more unified knowledge base, as well as adequate direction that drives improvement in evaluative processes, resulting in better public policies applied in the territory, such as rural development policies.

1.3. The important contribution of the development field

The main international bodies (such as the World Bank, the International Fund for Agricultural Development and the Food and Agriculture Organization of the United Nations) responded to the difficulties of monitoring and evaluating their interventions in developing countries by creating the first manuals and guides for promoting the introduction of an evaluation culture (Casley & Kumar, 1990).

These efforts, and the requirements imposed for evaluating projects financed by these organizations, led to the creation of the first evaluation associations in developing countries. Some of the most important milestones are the following: (1) The organization by the Inter-American Development Bank and the International Fund for Agricultural Development of the first evaluation seminar in Central America and the Dominican Republic, which took place in San Jose, Costa Rica, in 1994. It led to the creation of the first regional evaluation association2 in Central America: the Central American Evaluation Association (ACE). (2) In 1996, the United Nations (through IFAD) sponsored PREVAL, a programme to strengthen the regional capacity for monitoring and evaluation in Latin America and the Caribbean. Nowadays, PREVAL is an international platform that advises governments and rural organizations in order to strengthen their ability to design and develop systems for planning, monitoring and evaluation (PREVAL, 2012). (3) "The African Evaluation Association (AfrEA) was created as an informal network facilitated by UNICEF Eastern and Southern Africa Regional Office (ESARO)" (Patton, 1999). The inaugural conference of the African Evaluation Association was held in Nairobi on 13th-17th September 1999, and was attended by over 300 evaluators from 35 countries. (4) Efforts by the UNDP (United Nations Development Programme) and the World Bank towards the establishment of an international association for development evaluation resulted in the creation of IDEAS (International Development Evaluation Association) in 2002.

1.4. The creation of a professional expanding field

Another relevant effort to achieve the unified knowledge base required for the institutionalization of evaluation was carried out in 1975 in the United States. The objective was to develop professional standards for evaluating programmes. To achieve this, a coalition of major professional associations concerned with the quality of evaluation was created: the Joint Committee on Standards for Educational Evaluation (JCSEE, 2012), which continues to grow and work today.

The result of this project was the "Standards for Evaluations of Educational Programs, Projects, and Materials", published in 1981 and revised in 1994 (as The Program Evaluation Standards); the 3rd edition is now available.

Nowadays, the Program Evaluation Standards (PES) consist of a quality-control checklist for evaluation projects, made up of 30 criteria arranged in five categories: utility, feasibility, propriety, accuracy and accountability, which are the five standards required of an evaluation (Yarbrough et al., 2011).

In 1989, the American National Standards Institute approved the first PES, and they are currently used in the majority of evaluations. In some areas with significant cultural differences, working groups are adapting the PES to these contexts. Donor agencies have also adopted the PES for assessing the quality of project evaluations in developing countries.

A clear indicator of the growth of the evaluation culture in an international context is the exponential increase in evaluation associations and networks in recent years3. There are currently over 150 national and regional4 evaluation organizations (IOCE, 2012a)5. Most of them have been consolidated and have contributed to the creation of an international evaluation community (Lundgren, 2000; Mertens & Russon, 2000; Picciotto, 2003; Mertens, 2005; IOCE, 2012a). Part of the work of these organizations consists of developing standards, values and principles to guide evaluation. Other organizations have different objectives and focus more on spreading the evaluation culture and on developing skills, experience and methods. The directives developed by the evaluation societies are not regulatory in character; rather, they are recommendations from various professionals on how evaluation should contribute to society while respecting the population being worked with.

Since the foundation of the Canadian Evaluation Society (CES) in 1981 and the American Evaluation Association (AEA) in 1986, new organizations have been founded at a steady rate. In 1995 there were 6 regional and national evaluation organizations [see appendix]. In 1998 this had increased to 12, and in 1999 there were over 20 (including new types of organizations, such as networks and forums, as well as associations). Since 2003 the number of regional evaluation associations has remained constant, although the number of national organizations has increased significantly, from 20 in 2003 to 122 at present (IOCE, 2012a). Figure 2 shows the current regional associations according to IOCE data. National associations are detailed in the appendix.

The European Evaluation Society (EES) was founded in 1994. Since then, it has played a crucial role in promoting and creating the national societies that now exist in almost all European countries [see appendix]. Only the United Kingdom Evaluation Society existed before the European Evaluation Society was created. There are one further regional evaluation association and one network in Europe: DeGEval (the regional association covering Germany and Austria) and NESE (Network of Evaluation Societies of Europe) (IOCE, 2012b).

In 1999, the African Evaluation Association (AfrEA) was created as an informal evaluation network. Subsequently, various African countries created their own national evaluation associations or networks. The number of associations increased from 6 in 1999 to 16 in 2001, and currently stands at 40 (IOCE, 2012a). Furthermore, another thirteen countries participate in AfrEA's activities without having their own evaluation societies. It is also worth mentioning that Africa is the continent with the largest number of regional evaluation organizations, with a total of four: the African Evaluation Association (AfrEA), the Africa Gender and Development Evaluation Network (AGDEN), the African Community of Practice on Managing for Development Results (AfricaCop-MfDR) and the Monitoring and Evaluation East Africa Network (Mandeea) (IOCE, 2012b).

In Asia, the first associations were established in Israel in 1998, and in Malaysia and Sri Lanka in 1999. In 2003, three new associations were formed, as well as two forums. Currently, evaluation has spread to 23 Asian countries [see appendix] (IOCE, 2012b), some of which are particularly relevant, such as China or India. There have also been advances in the creation of regional associations, of which there are currently four: the Australasian Evaluation Society (AES), the South-East Asia Community of Practice for M&E of Climate Change Interventions (SEA Change), the Community of Evaluators (CoE SA) and the International Program Evaluation Network (IPEN) (IOCE, 2012a). It is important to note that, despite having four regional associations, this area comprises two continents (Asia and Oceania), meaning that Africa remains the continent with the largest number of such associations.

In Latin America, the international organizations have played an important part in creating evaluation associations. The two most important milestones in this regard have been the creation of the Central American Evaluation Association (ACE) in 1994 and the creation of PREVAL in 1996. These two regional initiatives prompted the creation of a new evaluation network in 2003: the Red de Seguimiento, Evaluación y Sistematización de América Latina y el Caribe (ReLAC), which plays an important role in supporting the international evaluation conferences (ReLAC, 2012).

1.5. The internationalization of evaluation

The creation of an international evaluation community represents a great opportunity for everyone to learn from each other and become more successful in the evaluation profession (Love & Russon, 2002). In November 1995, the American and Canadian evaluation societies organized a conference in Vancouver, in collaboration with the European Evaluation Society. This was the first truly international conference, with 1,600 evaluators from 65 countries across the five continents. This conference was a decisive moment in the creation of an international evaluation community (Patton, 2001).

In 2000, thanks to a donation from the W.K. Kellogg Foundation, a meeting took place in Barbados in which representatives from 15 national and regional evaluation associations from across the world took part. During the meeting, a formal proposal was produced for creating an international evaluation organization, which culminated in the creation of the International Organization for Cooperation in Evaluation (IOCE) in 2003.

The IOCE was created as a flexible organization comprising national and regional evaluation entities. Its objectives were to strengthen leadership and evaluation capability in developing countries, to promote links between the theory and the practical application of evaluation across the world, and to promote evaluation as a profession. The overall goal was to create a global vision for identifying and proposing solutions to development problems around the world.

In addition, evaluators specializing in development have at their disposal a discussion forum for sharing knowledge and experiences, as well as for promoting the quality of their work: the International Development Evaluation Association (IDEAS). As previously mentioned, this association was founded in 2002 thanks to the support of international bodies such as the United Nations Development Programme (UNDP) and the World Bank.

There are currently eleven further international and thematic associations. Seven of these are focused on rural development interventions: the Network of Networks on Impact Evaluation (NONIE), the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP), the International Organization for Collaborative Outcome Management (IOCOM), the Evaluation Cooperation Group (ECGnet), the DAC Network on Development Evaluation (OECD/DAC EvalNet), the United Nations Evaluation Group (UNEG), and UNICEF. Two are focused on the evaluation of environmental impacts: Climate-Eval (GEF) and the Environmental Evaluators Network (EEN). Finally, two are restricted to the francophone area: the Organisation Internationale de la Francophonie (OIF) and the Réseau francophone d'évaluation (RFE); together they constitute Le portail francophone de l'évaluation.

1.6. Current Approach

With the expansion of an evaluation culture, there have been many advances related to its processes, mechanisms, tools and results. Nowadays, there are professionals and associations in all regions of the world, creating a critical mass of evaluators capable of responding to the increasing demand for these activities. Furthermore, advances in information and communication technology facilitate knowledge sharing, cooperation and the creation of strategic alliances between the associations and their members.

For some time, evaluation was too heavily focused on the technical arena, with methodologies and instruments that originated from (and depended on) scientific research. During the last twenty years this has changed, and evaluation has become increasingly differentiated from rigorous control and strict research designs. Evaluation is evolving, and is now positioned as a prominent tool for improving the execution, management and transparency of policies, programmes and projects.

2. Main contributions: Reasons for its expansion

The value of evaluation depends on the stance we take: the value may lie in helping to interpret the results of a particular policy, programme or project (theoretical stance), in obtaining evidence that the policy, programme or project works (evidential stance), or in learning from implementation experiences (learning stance), although these stances are not completely independent. In all of these cases, evaluation is proving crucial for political transparency and for demonstrating the effectiveness of public management (Patton, 2001). The main contributions of evaluation to rural development interventions can be summarized in the following three items.

2.1. Evaluation as a source of value and improvement

As previously mentioned, evaluation involves making judgments about the perceived value or importance of an intervention. As a result, the evaluator needs a value system with which to approach evaluation, a system that should not be imposed but rather developed jointly with those involved. Therefore, evaluation rests on value theories that help to judge facts, and on practical theories that cover evaluation tools and methods.

As a result, evaluators analyze policies, programmes and projects in great depth, always seeking to improve them (Gocht et al., 1994). Such improvement draws on the results observed in past evaluation exercises in order to introduce innovations that lead to better new interventions. An experienced evaluator will have analyzed a multitude of programmes and will have the knowledge to improve others, making them more effective and efficient. This is why evaluators are consulted not only for evaluations, but also for the design of programmes. In this field, a whole area of research has emerged to establish what makes evaluation experts such experts. Part of this research involves the development and implementation of artificial intelligence in evaluation.

2.2. Evaluation as a source of evidence and its political implications

The fundamental question in evaluation is to identify what works and what is worth supporting. As a result, all evaluation work is political and carries a value burden (W. K. Kellogg Foundation, 1998). Furthermore, all phases of the evaluation process have political implications: in the choice of themes, in decision-making, in the population's perception of the intervention, and particularly in the interests that are taken into consideration. It is important that evaluators understand the implications of their actions and maintain a continuous dialogue with all the pressure groups involved (Díaz-Puente et al., 2008).

On the other hand, evaluation serves two other very important functions with regards to rural development interventions. First, it is a process that allows learning - in terms of what does and doesn't work - in order to improve interventions and meet the set objectives (Mokate, 2000). Second, evaluation plays a crucial role in decision making (and justifying these decisions), finding suitable methods for comparing, selecting and rejecting alternative projects in scenarios where resources are scarce (Cohen & Franco, 2006).

2.3. Evaluation as a source of learning and training

Evaluation activities usually have two uses: the use of results, usually captured in a final evaluation report, and the use of process, which encompasses all the evaluation activities that lead to those results. In the evaluation of rural development interventions, the use of the evaluation process is particularly interesting, in comparison to the sole use of results, which is often limited to demonstrating that interventions work.

The adequate use of the process, through increased participation amongst agents, links the knowledge and results generated in the evaluation with knowledge-acquisition processes amongst the population. These skills allow the population to make use of evaluation tools to manage their own development process, to use them for continuous improvement, and to obtain evidence that supports decision-making. Furthermore, it can also help to overcome the natural resistance to change that often arises. The use of the process can in itself have an impact on knowledge acquisition (Patton, 1999).

During the evaluation of development programmes it is important that, in addition to providing a judgment, there is a serious concern for triggering learning processes, since the results and reports come to an end, but the learning and skills acquired by those involved continue. However, the importance of using results should not be forgotten (Patton, 1997; 1998). The challenge for evaluators is to adapt both uses to each context.

Conclusion

The importance of evaluation in development programmes is clear, given its expansion across the world through a huge number of professional associations at the national, regional and global levels. Although this expansion has reached many parts of the planet, there are still places where an evaluation culture is not sufficiently implemented to benefit from its value and potential opportunities.

These benefits justify its substantial expansion. This is important for two reasons. First, evaluation allows resources to be allocated in the best possible way, adjusting them strictly to the established criteria; in the current context of economic crisis, this feature becomes hugely important. Second, evaluation allows solid evidence to be provided that justifies the chosen option, which is important in the current context for reinforcing and maintaining democratic systems.

Evaluation is also a source of values that allows explicit criteria and standards to be established, which can be used to judge, analyze, interpret, compare and generalize the results of a particular action, as well as to improve interventions, through learning (based on the use of both process and results) and through the application of different methodologies and tools that can be adapted to the context of the intervention being evaluated.

The benefits mentioned, as well as the role of evaluation as a factor that supports the training of people, mean that this discipline plays an important and promising part in achieving sustainable and endogenous development processes in the territories.

Despite these previous considerations, there are some challenges that evaluation should respond to in the next few years. One of these is the evaluation of complex systems (Patton, 2010), where many actors, many variables and non-linear relationships make evaluation quite messy. Rural development intervention is one of the fields in which advances here could be crucial in order to exploit all the possible benefits of evaluation activities for the improvement of interventions. Another challenge is the centralization of evaluation activities, especially in rural development policies (High & Nemes, 2007), where financing agents ask for the evaluation of the whole programme, which sometimes represents a much larger scale than the one applied in the implementation of the programme. Addressing these challenges (amongst others not covered in this paper) will make the evaluation culture and its expansion even more useful in improving interventions.

Appendix

This Appendix provides information on the various evaluation organizations created during the expansion process.

The associations listed in this first paragraph are those that were created earliest, between 1981 and 1995. The Canadian Evaluation Society was the first national association, founded in 1981. According to the latest data provided on its website, it has 1,750 individual Canadian members as well as 103 international members and 71 libraries, although these figures reflect the situation as of 30th June 2003 (CES, 2003). The American Evaluation Association was founded in 1986; it currently has around 5,500 members and represents 26 different American associations (AEA, 2012). The Australasian Evaluation Society (AES) was the first regional evaluation association; founded in 1991, it currently has over 1,000 members (AES, 2012). Subsequently, the creation of evaluation associations started in Europe and Latin America. In 1994 the United Kingdom Evaluation Society (UKES) was created (UKES, 2012), as was the European Evaluation Society (EES) (EES, 2012). In 1995, as a result of the first Evaluation Seminar for Central America, Panama and the Dominican Republic, promoted by the Inter-American Development Bank and the International Fund for Agricultural Development, the Central American Evaluation Association (ACE) was founded.

Following the creation of the European Evaluation Society (EES) in 1994, national associations were created, and there are now 28. There are associations in Albania (Société albanaise d'évaluation de programme), Belgium (Flemish Evaluation Platform in Flanders and Société wallonne de l'Evaluation et de la Prospective in Wallonia), the Czech Republic (Czech Evaluation Society), Denmark (Danish Evaluation Society), Finland (Finnish Evaluation Society), France (La Société Française de l'Évaluation), Iceland (Icelandic Evaluation Society), Ireland (Irish Evaluation Network), Italy (Associazione Italiana di Valutazione), The Netherlands (Dutch Evaluation Society), Norway (Norwegian Evaluation Society), Poland (Polskie Towarzystwo Ewaluacyjne), Portugal (Portuguese Evaluation Society), Romania (Romanian Evaluation Network), Scotland (Scottish Evaluation Network), Slovakia (Slovak Society for Evaluation), Slovenia (Slovensko drustvo evalvatorjev), Spain (Sociedad Española de Evaluación), Sweden (Swedish Evaluation Society), Switzerland (Schweizerische Evaluationsgesellschaft), Turkey (Turkish Evaluation Association) and the UK (United Kingdom Evaluation Society).

In Latin America, the creation of national evaluation associations also continued. There are now 17 of them, which are the following: Red Argentina de Evaluación, Red de Monitoreo y Evaluación Boliviana, Brazilian Evaluation Agency, Brazilian Association of Educational Evaluation, Brazilian M&E Network, Red Chilena de Evaluación, Red Colombiana de Sistemas de Información, Planeación, Seguimiento, Evaluación y Sistematización, Cuban Evaluation Network, Red de Evaluadores de la República Dominicana, Evaluadores Ecuador, Red de Evaluación en El Salvador, Red de Evaluación en Guatemala, Red Hondureña de Profesionales de la Evaluación, Seguimiento y Sistematización, Red Nicaragüense de Seguimiento y Evaluación, Red Paraguaya de Evaluación, Red Peruana de Seguimiento y Evaluación, Red Uruguaya de Evaluadores, Grupo Seguimiento, Evaluación y Sistematización (Venezuela) (IOCE, 2012a).

In Africa, following the creation of the African Evaluation Association (AfrEA) in 1999 (AfrEA, 2012), associations were created in Kenya, Niger and Ghana. The Kenyan association is the oldest and most influential in Africa, and is the model on which other African countries base theirs. There are also evaluation associations in Benin (Réseau Béninois de Suivi-Evaluation), Botswana (Botswana Evaluation Association), Burkina Faso (Réseau Burkinabé de Suivi et d'Evaluation), Burundi (Burundi Evaluation Network), Cameroon (Cameroon Development Evaluation Association), Cape Verde (Cape Verde Evaluation Network), Comoros (Association Comorienne de Suivi et Evaluation), the Democratic Republic of Congo (Association Congolaise de Suivi et Evaluation), Ivory Coast (Réseau Ivoirien de Suivi et Evaluation), Egypt (Egyptian Development Evaluation Network and Evaluation and Research Network in Egypt), Eritrea (Eritrea Evaluation Network), Ethiopia (Ethiopian Evaluation Association), Ghana (Ghana Evaluators Association, Ghana Evaluators Network and Ghana M&E Forum), Guinea (Association Guinéenne de Suivi-Evaluation and Association Guinéenne des Évaluateurs), Israel (Israeli Association for Program Evaluation), Jordan (Monitoring and Evaluation Society of Jordan), Kenya (Community of Evaluators for M&E Solutions and Evaluation Society of Kenya), Madagascar (Malagasy Association pour le Suivi et l'Evaluation), Malawi (Malawi M&E and Malawi Network of Evaluators), Mali (Association pour la Promotion de l'Evaluation au Mali), Mauritania (Association Mauritanienne du Suivi et de l'Evaluation and Réseau Mauritanien de Suivi-Evaluation), Morocco (Réseau Marocain de Suivi et Evaluation and L'Association Marocaine de l'Evaluation), Namibia (Namibia Monitoring, Evaluation and Research Network), Niger (Le Réseau Nigérien de Suivi et Evaluation), Nigeria (Monitoring and Evaluation Network of Nigeria, Nigerian Evaluation Association and Society for Monitoring and Evaluation, Nigeria), Rwanda (Rwanda Evaluation Society and Rwanda Monitoring and Evaluation Network), Sénégal (Réseau sénégalais de l'évaluation), South Africa (South African Evaluation Network), Swaziland (Swazi Evaluation Network, being formed), Tanzania (Tanzania Evaluation Association), Tanzania/Zanzibar (Zanzibar M&E Association), Uganda (Northern Uganda M&E Network and Ugandan Evaluation Association), Zambia (Zambia Evaluation Association) and Zimbabwe (Zimbabwe Evaluation Society) (IOCE, 2012a).

The following were created in Asia in 2003: the Japanese Evaluation Society (JES), the Thailand Evaluation Network, the Korean Evaluation Association and the Bangladesh Evaluation Forum. There are now also evaluation societies in Afghanistan (Community of Evaluators / Afghanistan), Azerbaijan (Azerbaijan Evaluation Network), Bangladesh (Bangladesh Evaluation Network), China (China Enterprise Evaluation Association), Georgia (Georgia Evaluation Association), India (with three evaluation organizations: Indian Evaluation Network, India Monitoring and Evaluation Learning and Action Network, and the Development Evaluation Society of India), Indonesia (Indonesian Development Evaluation Community), Kazakhstan (Kazakhstan Evaluation Association), Kyrgyzstan (National Monitoring and Evaluation Network of the Kyrgyz Republic), Malaysia (Malaysian Evaluation Society), Nepal (Community of Evaluators and Nepal Evaluation Society), New Zealand (New Zealand Evaluation Association), Papua New Guinea (PNG Association of Professional Evaluators), the Philippines (Philippines Monitoring and Evaluation Society), Pakistan (Pakistan Evaluation Network), Sri Lanka (Sri Lanka Evaluation Association), Tajikistan (Tajikistan M&E Community of Practice), and Ukraine (Ukrainian Evaluation Association) (IOCE, 2012a).


Foot Note

1The concept of "evaluation culture" is covered extensively in articles such as Trochim (1992), Guijt (2000) and Rajeswari (2003). The latter presents an interesting synthesis of the aspects previously developed around the concept of evaluation culture.
2ACE was the first regional association in America, but the second in the world. The first one was the Australasian Evaluation Society (AES).
3The geographical division considered in this section is the same as that applied by the IOCE (International Organization for Cooperation in Evaluation).
4The term "regional" refers to the supranational area. This definition is not common in Spain, but is widely used in the rest of the world.
5All the information in this article taken from the IOCE website is current as of the latest revision of this article, in August 2012. However, it is important to emphasize that, according to the IOCE website, a major upgrade of its database was planned for mid-September 2012, based on the survey carried out by EvalPartners.


References

American Evaluation Association (AEA) (2012). Local affiliates. Retrieved from http://www.eval.org/aboutus/organization/affiliates.asp

African Evaluation Association (AfrEA) (2012). About the Conference. Retrieved from http://www.afrea.net/about.html

Australasian Evaluation Society (AES) (2012). About the Australasian Evaluation Society. Retrieved from http://www.aes.asn.au/membership/

Ballart, X. (1992). ¿Cómo evaluar programas y servicios públicos? Aproximación sistémica y estudios de caso. Madrid: Ministerio para las Administraciones Públicas.

Canadian Evaluation Society (CES) (2003). About the Canadian Evaluation Society. Accessed on 23rd August 2012 at: http://www.evaluationcanada.ca/site.cgi?s=1&ss=2&_lang=en

Casley, D. J., & Kumar, K. (1990). Seguimiento y evaluación de proyectos en agricultura. Madrid: Mundi-Prensa.

Cohen, E., & Franco, R. (2006). Evaluación de proyectos sociales (7a ed.). México DF: Siglo XXI Editores.

Derlien, H. U. (1990). Genesis and structure of evaluation efforts in comparative perspective. In R. C. Rist (Ed.), Program evaluation and the management of government: Patterns and prospects across eight nations (pp. 147-177). New Brunswick: Transaction Publishers.

Díaz-Puente, J. M., Cazorla, A., & Dorrego, A. (2007). Crossing national, continental, and linguistic boundaries: Toward a worldwide evaluation research community in journals of evaluation. American Journal of Evaluation, 28 (4), 399-415.

Díaz-Puente, J. M., Yagüe, J. L., & Afonso, A. (2008). Building evaluation capacity in Spain: A case study of rural development and empowerment in the European Union. Evaluation Review, 32 (5), 478-506.

European Evaluation Society (EES) (2012). About EES. Retrieved from http://www.europeanevaluation.org/about-ees.htm

Furubo, J. E., Rist, R. C., & Sandahl, R. (2002). International Atlas of Evaluation. Comparative Policy Analysis Series. New Brunswick: Transaction Publishers.

Guijt, I. (2000). Methodological issues in participatory monitoring and evaluation. In M. Estrella (Ed.), Learning from Change: Issues and Experiences in Participatory Monitoring and Evaluation (pp. 201-216). London: IDRC/ITP.

Gocht, W., Hewitt, A., & Hoebink, P. (1994). The comparative effectiveness and the coordinate efforts of EU donors. The Hague: NAR.

High, C., & Nemes, G. (2007). Social Learning in LEADER: Exogenous, Endogenous and Hybrid Evaluation in Rural Development. Sociologia Ruralis, 47 (2), 103-119.

International Organization for Cooperation in Evaluation (IOCE) (2012a). IOCE's master list of evaluation organizations. Retrieved from http://www.ioce.net/members/reg_intl_organizations.shtml

International Organization for Cooperation in Evaluation (IOCE) (2012b). Regional or International Evaluation Organizations. Retrieved from http://ioce.net/members/reg_intl_organizations.shtml

Joint Committee on Standards for Educational Evaluation (2012). About JCSEE. Retrieved from http://www.jcsee.org/about

Love, A. J., & Russon, C. (2000). Building a worldwide evaluation community: Past, present, and future. Evaluation and Program Planning, 23 (4), 449-459.

Love, A. J., & Russon, C. (2002). International evaluation: The way forward. In Canadian Evaluation Society National Office of Ottawa (Ed.), The Canadian Evaluation Society Newsletter. Ottawa: CES National Office.

Lundgren, H. (2000). A proposal for an International Organization for Co-operation in Evaluation [IOCE]. Evaluation, 6 (4), 481-485.

Mertens, D. M. (2005). The inauguration of the International Organization for Cooperation in Evaluation. American Journal of Evaluation, 26 (1), 124-130.

Mertens, D. M., & Russon, C. (2000). A Proposal for the International Organization for Cooperation in Evaluation. American Journal of Evaluation, 21 (2), 275-283.

Mokate, K. M. (2000). El Monitoreo y La Evaluación: herramientas indispensables de la gerencia social. Washington D.C.: Instituto Interamericano de Desarrollo Social.

Patton, M. Q. (1997). Utilization-Focused Evaluation: The New Century Text (3rd ed.). Thousand Oaks: Sage Publications.

Patton, M. Q. (1998). Discovering process use. Evaluation, 4 (2), 225-233.

Patton, M. Q. (1999). Utilization-Focused Evaluation in Africa. Evaluation training lectures delivered to the inaugural conference of the African Evaluation Association. Nairobi: Prudence Nkinda Chaiban.

Patton, M. Q. (2001, April). Remarks to the Canadian Evaluation Society, National Capital Chapter, Annual General Meeting. Banff, Canada.

Patton, M. Q. (2002). Evaluation Worldwide. Canadian Evaluation Society Newsletter, 12-14.

Patton, M. Q. (2011). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: The Guilford Press.

Picciotto, R. (2003). International trends and development evaluation: The need for IDEAS. American Journal of Evaluation, 24 (2), 227-234.

Plataforma Regional de Desarrollo de Capacidades en Evaluación y Sistematización de América Latina y el Caribe (PREVAL) (2012). Acerca de PREVAL. Retrieved from http://preval.org/es/content/acerca-de-preval

Rajeswari, S. R. (2003). Disciplines, institutions and organizations: impact assessments in context. Agricultural Systems, 78 (2), 185-211.

Red de Seguimiento, Evaluación y Sistematización de América Latina y el Caribe (ReLAC) (2012). ReLAC presentation. Retrieved from http://noticiasrelac.ning.com/

Román, C. (1999). Una estrategia de desarrollo económico para Andalucía. Sevilla: Instituto de Desarrollo Regional, Fundación Universitaria.

Scriven, M. (2005a). Key evaluation checklist (KEC). Accessed on 30th May 2011 at: http://preval.org/documentos/2071.pdf

Scriven, M. (2005b). Logic of evaluation. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 235-238). Thousand Oaks: Sage Publications.

Sociedad Española de Evaluación (SEE) (2011). Acerca de la Sociedad Española de Evaluación de Políticas Públicas. Retrieved from http://www.sociedadevaluacion.org/website/index.php?q=about

Stufflebeam, D. L., & Shinkfield, A. J. (1985). Systematic Evaluation. Boston: Kluwer-Nijhoff Publishing.

Stone, C. N. (1985). Efficiency versus social learning: A reconsideration of the implementation process. Policy Studies Review, 4 (3), 484-496.

Trochim, W. M. K. (1992). Developing an evaluation culture for international agricultural research. In Lee et al. (Eds.), Assessing the Impact of International Agricultural Research for Sustainable Development. Ithaca: Cornell Press.

United Kingdom Evaluation Society (UKES) (2012). The UK Evaluation Society. Retrieved from http://www.evaluation.org.uk/about-us/about-ukes

Vela, R. (2003). Hacia un nuevo enfoque de la evaluación de impacto de proyectos de desarrollo rural. Cuadernos de Desarrollo Rural, 50, 125-142.

Walters, J. (1996). Auditor power! Governing, 25-29.

W. K. Kellogg Foundation (1998). The W. K. Kellogg Foundation Evaluation Handbook: Philosophy and Expectations (updated in 2004). Battle Creek, Michigan. Retrieved from http://www.wkkf.org/knowledge-center/resources/2010/W-K-Kellogg-Foundation-Evaluation-Handbook.aspx

Wye, C. G., & Sonnichsen, R. C. (1992). Editor's notes. New Directions for Program Evaluation, 55, 1-10.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks: Sage.