Revista Facultad de Ingeniería Universidad de Antioquia

Print version ISSN 0120-6230

Rev.fac.ing.univ. Antioquia  no.77 Medellín Oct./Dec. 2015

https://doi.org/10.17533/udea.redin.n77a13 

ORIGINAL ARTICLE

 


 

Assessment proposal of teaching and learning strategies in software process improvement

 

Propuesta de evaluación de estrategias de enseñanza/aprendizaje en mejora de procesos software

 

 

Bell Manrique-Losada, Gloria Piedad Gasca-Hurtado, María Clara Gómez Álvarez*

Grupo de Investigación ARKADIUS, Facultad de Ingenierías, Universidad de Medellín. Carrera 87 # 30-65. Medellín, Colombia.

* Corresponding author: María Clara Gómez Álvarez, e–mail: mcgomez@udem.edu.co

ISSN  0120–6230

e–ISSN 2422–2844

 

(Received March 06, 2015; accepted June 24, 2015)

 

 


ABSTRACT

Teaching and learning environments for software process improvement are incorporating new strategies that help decrease current weaknesses related to basic-science training, motivation, and communication. One such strategy is the use of gamification principles to develop student competencies related to teamwork, problem solving, leadership, and effective communication. Assessing teaching and learning strategies requires considering the following features: i) the student competencies; ii) the didactic proposals and techniques; and iii) the satisfaction level of the students with the teaching strategy. In this paper, an assessment proposal of teaching and learning strategies in software process improvement is presented, together with a preliminary validation method based on gamification principles. Finally, a case study for validating a didactic proposal is presented as a pilot of the assessment proposal.

Keywords: Assessment, teaching-learning process, software process improvement, gamification, competences


RESUMEN

En los ambientes de enseñanza y aprendizaje de mejora de procesos software se han incorporado nuevas estrategias para ayudar a disminuir debilidades actuales asociadas con la fundamentación en ciencias básicas, la motivación y la comunicación. Una de estas estrategias comprende el uso de principios de gamificación, los cuales buscan en el estudiante el desarrollo de competencias relacionadas con el trabajo en equipo, la resolución de problemas, el liderazgo y la comunicación efectiva. Para evaluar las estrategias de enseñanza y aprendizaje, es necesario considerar características como: i) competencias de los estudiantes; ii) propuestas y técnicas didácticas; y iii) niveles de satisfacción de los estudiantes respecto al proceso de enseñanza. En este artículo se presenta una propuesta de evaluación de estrategias de enseñanza/aprendizaje en mejora de procesos de software, junto con un método de validación preliminar basado en principios de gamificación. Finalmente, se presenta un caso de estudio para validar la propuesta didáctica, como piloto de la propuesta de evaluación.

Palabras clave: Evaluación, proceso de enseñanza-aprendizaje, mejora de procesos de software, gamificación, competencias


1. Introduction

Teaching and learning strategies involve activities and methods used to facilitate the achievement of learning outcomes by students. Environments for software process improvement have been incorporating such strategies to help decrease current weaknesses related to basic-science training, motivation, and communication. These strategies are commonly classified as traditional, dynamic, or own strategies [1], depending on the role of the student during the process and the level of application achieved in real contexts.

In teaching and learning environments for software process improvement, own strategies are the most commonly used, focusing on the design of courses about specific topics [2, 3]. One such strategy is gamification, which applies design principles based on game mechanics and game thinking to engage users in a specific context and to support problem-solving processes. According to [4], some of these principles are: defining goals; orienting to challenges and quests; personalizing experience and progress; and promoting feedback, competition, and cooperation. In software process improvement, applying such principles stimulates students to develop a set of competencies related to teamwork, problem solving, leadership, and effective communication. Gamification has been called one of the most important trends in technology by several industry experts, and it can potentially be applied to almost any domain to create fun and engaging experiences. For this reason, by using the principles of gamification, we propose to translate the traditional enthusiasm for play and social media engagement into the classroom, as a basis for supporting and accelerating student learning.

Assessment is one of the most important components of the teaching and learning process [5]. Traditionally, given the difficulties associated with precise assessments of teaching effectiveness, the main method for assessing teaching has been student evaluation. Lately, evaluation methods have been receiving increasing attention [6, 7], seeking a set of subjects to consider in a precise assessment design. In the literature, we found several approaches defining features to assess in the teaching and learning process. In summary, some of them are: i) the student competencies; ii) the didactic proposals and techniques; and iii) the satisfaction level of the students.

In this paper, an assessment proposal of teaching and learning strategies in software process improvement is presented. Our approach is based on the learning domains of Bloom's taxonomy [8], describing a three-domain structure: the cognitive and psychomotor domains in the first component of our proposal, the instrumental domain in the second component, and the affective domain in the third component. Each component of the assessment proposal is based on templates guiding the application of questionnaires as data collection instruments for measuring students' perception of the strategy and their satisfaction with the learning experience.

We develop a preliminary validation based on gamification principles through a case study. The case consists of assessing a didactic proposal for teaching defect management in the context of the Team Software Process (TSP). This proposal is designed as a game, used as a training strategy, to facilitate the introduction of the basic concepts of defect management. The results show promising aspects of the student experience, such as a high level of enjoyment, a good level of difficulty, and a notable closeness to reality of the proposal. These aspects show the relevance of applying gamification principles in didactic proposals for achieving significant student learning.

The paper is organized as follows: in the following section, we present the conceptual framework and background on the teaching and learning process and strategies for software process improvement. Then, the assessment proposal of teaching and learning strategies in software process improvement is presented. Later, we present a case study for validating the proposal and analyze the results. Finally, in the last section, conclusions and future work are presented.

2. Conceptual framework and background

2.1. Teaching and learning strategies in the software process improvement

Software process improvement is an important topic in the context of software engineering, since it focuses on how to measure the software development process, the defect density of the products, and the programmers' productivity [9].

According to [1], the teaching and learning strategies for software process improvement are classified into the following categories:

  1. Traditional teaching-learning strategies: lectures in which the basic concepts are presented. In such strategies, the student is a passive actor in the learning process, so it is difficult to achieve an integrated, applied view of the presented concepts.
  2. Dynamic teaching-learning strategies: strategies aimed at helping students to experience the application of concepts in an environment closer to reality (e.g., simulated environments or case studies). For students, the main difficulty in using these strategies is achieving a balance between the scope of the work and independent work.
  3. Own teaching-learning strategies: strategies focused on the design of courses about specific topics (e.g., PSP/TSP courses have reported experiences showing reductions in software product defects during course development) [2, 3].

Gamification is the process of using game mechanics and game thinking to involve users and solve problems. The use of gamification principles in teaching and learning strategies for software process improvement is a growing trend that seeks to increase student motivation and to develop competencies such as teamwork, problem solving, leadership, and effective communication [10-12]. These principles are being applied in non-game applications to make them more fun and engaging [13]. According to [4], some commonly used principles are: defining goals; orienting to challenges and quests; personalizing experience and progress; and promoting feedback, competition, and cooperation.

Some examples of game mechanics used in gamification that could be included in the higher education domain are: positive feedback, points accumulation, obtaining badges, increased visibility of status, recognition of progress, and pleasant surprises [14].
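To make these mechanics more concrete, the following is a minimal sketch, in Python, of how points accumulation, badges, and positive feedback could be represented for a hypothetical classroom scenario; the class name, thresholds, and badge labels are illustrative assumptions and are not part of the cited works.

class StudentProgress:
    """Toy representation of points, badges, and progress feedback."""

    def __init__(self, name: str):
        self.name = name
        self.points = 0
        self.badges: list[str] = []

    def award_points(self, amount: int, reason: str) -> str:
        # Accumulate points and return an immediate positive-feedback message.
        self.points += amount
        self._check_badges()
        return f"{self.name}: +{amount} points for {reason} (total: {self.points})"

    def _check_badges(self) -> None:
        # Grant badges at fixed point thresholds to increase status visibility.
        thresholds = {10: "Apprentice", 25: "Team Player", 50: "Quality Champion"}
        for limit, badge in thresholds.items():
            if self.points >= limit and badge not in self.badges:
                self.badges.append(badge)

if __name__ == "__main__":
    student = StudentProgress("Ana")
    print(student.award_points(12, "finding defects early"))
    print("Badges:", student.badges)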

2.2. Assessment framework and taxonomies

Assessment has been considered one of the most important components of the teaching and learning process [5]. According to [15], assessment is a continuous and participative process for measuring the evolution of student learning and making decisions to improve the design and development of teaching processes. Evaluation is a judgment by the instructor about whether the instruction has met its learning outcomes. By following a contextualized and sound assessment, it is possible to monitor the success of a program/course/subject in achieving its intended learning outcomes. In this sense, we can determine: i) what students have learned; ii) the way they learned the material; and iii) their approach to learning before, during, or after an activity [16].

Given the difficulties associated with precise assessments of teaching effectiveness, student evaluations have traditionally been the primary means for assessing teaching, mainly in higher education. Lately, evaluation methods have been receiving increasing attention [5-7], seeking a set of subjects to consider in their design.

In the literature, we found several approaches defining the role of assessment in the teaching and learning process. [17] proposes two key components within the framework of an instructional design model: the first identifies the types of learning outcomes desired in students; the second is an instrument or method designed and used to obtain evidence of student learning accomplishment. In the same sense: i) [18] state the domains covered by the evaluation of teaching and learning: the evaluation of the practice of teaching, the evaluation of the students' capacity to learn, and the way teaching is received; ii) [19] recommends evaluating three aspects of teaching: planning, implementation, and results; and iii) [20] develop an original measure of learning in higher education and propose evaluation techniques including an overall score, which is the principal measure of student evaluation, together with measures of perceived learning, preparation and organization, instructor attitude, and the extent to which the course stimulated students to think. They also use an analysis of the determinants of student evaluations to suggest improved methods for evaluating instructors.

2.3. Competences and competence-based assessment

Currently, within processes of curricular transformation in formal and informal educational environments, new training trends based on competences are being implemented. Such an approach implies assessing programs, plans, new learning, and instruments to ensure the quality of the results that educational systems are achieving. Assessments should measure the knowledge and skills needed to function in realistic contexts, and they are truly authentic when they examine student performance on worthy intellectual tasks.

According to [21], a competence is the capacity to use the necessary resources to respond efficiently to a complex situation within a specific context. In the teaching and learning process, competencies comprise a set of knowledge, skills, and attitudes describing the learning outcomes of a program/course/subject [22].

Several approaches have been found as background for competence-based assessment, as follows: [23] shows the importance of contextual aspects in the analysis of achievement tests and the construction of contextual scales for each skill. The CDIO Initiative presents a framework for developing engineering education, to meet the requirements of a modern engineer and to emphasize the student's ability to engineer [24]; CDIO stands for Conceive-Design-Implement-Operate, and it is a program for preparing engineering students for forthcoming challenges by integrating competences related to product development projects. [25] defines a set of key competences for networks and working groups in engineering, and [26] design and assess educational objectives by applying a new competency taxonomy. In particular, competence-based assessment in software engineering has been considered a critical activity, and approaches such as the Competence Web-based Assessment Framework Specification proposed by [27] try to overcome the underlying difficulties.

3. Assessment Proposal

In Figure 1, we present an overview of the assessment proposal. Our approach is based on Bloom's taxonomy of learning domains, according to the revision by [8]. Since this taxonomy describes a three-domain structure, we consider the cognitive and psychomotor domains in the first component of our proposal and the affective domain in the third component; the second component is directly oriented to the teaching proposal/method. This approach is justified by the solid structure that such a framework provides for planning, designing, assessing, and evaluating training and learning effectiveness. We expect our assessment approach to serve as a base model ensuring that training and assessment are planned to deliver all the necessary development for students, and as a template for assessing the validity and coverage of any existing method.

In Figure 2, we show the steps for implementing the assessment proposal: a workflow in which the application of the didactic proposal is the first step toward measuring the student satisfaction level and evaluating the developed competences.

3.1. Competence-based assessment

Bloom's taxonomy underpins the classical Knowledge, Attitudes, and Skills structure of learning methods. Such a structure is organized in the cognitive and psychomotor domains. The cognitive domain comprises the following categories: knowledge, comprehension, application, analysis, synthesis, and evaluation. We define these categories as the learning levels of the templates supporting the assessment, as shown in Table 1. In addition, we include in such a template the following features to complement the assessment: evidence features and rubric features. The evidence features specify:

  1. The abilities/skills related to each learning level. For reference, in the table we include examples of skills related to each learning level.
  2. The application level of the competence accomplished with the strategy, in real contexts and situations. We propose application scales (categories) to facilitate the assignment, such as: Low/Medium/High or Superior/Advanced/Intermediate/Novice.
  3. The learning level achieved, which assigns a percentage or value to the current learning level in proportion to what the student should ideally achieve; we propose an interval [X out of Y].

Finally, we include a kind of rubric in the assessment template. [28] defines a rubric as an evaluation tool for assessing the degree of student compliance in a work or activity. Research on self-regulated learning and feedback suggests that learning improves when feedback directs students to monitor their learning and shows them how to achieve learning objectives [29]; [30] point out that rubrics increase student satisfaction and are clearly beneficial for both teachers and students. The rubric features comprise: i) the subject, detailing the specific topic being assessed; and ii) the weight of each subject, defined by the instructor, representing a score/percentage that quantitatively assesses the achievement of the competence.
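As an illustration of how the rubric features (subjects and instructor-defined weights) and the learning-level interval could be combined into a quantitative score, the following is a minimal Python sketch; the class name, example subjects, weights, and levels are hypothetical and show only one possible operationalization of the template.

from dataclasses import dataclass

@dataclass
class RubricItem:
    subject: str      # specific topic being assessed
    weight: float     # instructor-defined weight (fractions of the total score)
    achieved: float   # current learning level reached by the student (X)
    ideal: float      # learning level the student should achieve (Y)

    def contribution(self) -> float:
        # Weighted proportion of the ideal level reached for this subject.
        return self.weight * (self.achieved / self.ideal)

def competence_score(items: list[RubricItem]) -> float:
    """Overall achievement of the competence, expressed as a percentage."""
    return 100 * sum(item.contribution() for item in items)

if __name__ == "__main__":
    rubric = [
        RubricItem("Defect metrics comprehension", weight=0.4, achieved=3, ideal=4),
        RubricItem("Teamwork during the activity", weight=0.6, achieved=4, ideal=5),
    ]
    print(f"Competence achievement: {competence_score(rubric):.1f}%")  # 78.0%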

3.2. Didactic proposal assessment

The didactic proposal refers to the instructional components of the guide material for implementing a didactic activity in class. The goal of this assessment component is to identify improvement aspects of the activities proposed in the teaching and learning strategies, associated with features such as the quality, preparation, planning, organization, and design of the didactic proposal [20].

This assessment component is designed based on a data collection method [30], adapted as follows: a) define the objective of the data collection method; b) select and design the method to collect the needed data; c) apply the designed method; and d) obtain and analyze the data.

A questionnaire is a method widely used to collect data in scientific research [31], despite disadvantages such as the possibility of low response rates and a questionnaire design that makes it difficult to examine complex issues and opinions. We use this method to validate the didactic proposal, following the recommendations of the quantitative research process and considering the level of student engagement with the didactic activities and the specificity of the issues under study. The questionnaire was developed in two sections: in the first section, the perception of the didactic proposal is identified using a Likert scale; this perception indicates the level and approval factor of the participants on a five-point scale. In the second section, we define questions associated with the individual usage perception of the student; these questions use a two-option scale (Yes or No).

The questionnaire features facilitate data analysis according to the participants' perception of the activity supported by the didactic proposal. A model of the proposed questionnaire is shown in Table 2.
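As a sketch of how the answers to both sections could be tabulated for analysis, the following Python fragment assumes the Likert answers are coded 1-5 and the second-section answers are the strings "Yes"/"No"; the function names and example data are hypothetical.

from collections import Counter

def likert_distribution(answers: list[int]) -> dict[int, float]:
    """Percentage of participants selecting each point of the five-point scale."""
    counts = Counter(answers)
    total = len(answers)
    return {point: 100 * counts.get(point, 0) / total for point in range(1, 6)}

def yes_rate(answers: list[str]) -> float:
    """Percentage of 'Yes' answers for a two-option (Yes/No) question."""
    return 100 * sum(1 for a in answers if a.strip().lower() == "yes") / len(answers)

if __name__ == "__main__":
    enjoyment = [4, 5, 4, 3, 4, 5, 4, 2, 4, 4]      # hypothetical Likert answers
    would_use_again = ["Yes", "Yes", "No", "Yes"]   # hypothetical Yes/No answers
    print(likert_distribution(enjoyment))
    print(f"{yes_rate(would_use_again):.0f}% answered Yes")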

In the following section, we describe the assessment component oriented to the level of student satisfaction with the teaching process.

3.3. Student satisfaction assessment

This component is designed following the previously described method [31] for measuring satisfaction in the student experience.

The questionnaire was developed in two sections: in the first section, the experience and the satisfaction level of the student with the learning process are measured on a five-point Likert scale. In this section, we define six main features establishing the following measured aspects of the teaching and learning process: overall score of the activity, enjoyment, closeness to reality of the activity, difficulty, satisfaction with the instructor's attitude, and how the activity stimulates the student to think [20]. In the second section, the questions focus on measuring and analyzing student opinion. For this reason, open-ended questions are defined for aspects such as: the concepts learned according to the student, the steps/strategy/method used by the students to achieve the activity goals, and student suggestions for improving the activity. A model of the proposed questionnaire is shown in Table 3.

The previous questionnaire is designed to evaluate usability features regarding student experiences. When students must learn dense topics in a particular subject, designing a new teaching and learning strategy is often necessary. In the next section, we present a case study for assessing the student satisfaction component proposed in this paper. The case study is implemented using a new didactic proposal for teaching defect injection and prevention in software engineering, within the framework of a teaching strategy based on gamification principles.

4. Case study

The case study consists of assessing a didactic proposal for teaching defect management in the context of the Team Software Process (TSP). The didactic proposal is designed as a game, used as a training strategy, to facilitate the introduction of the basic concepts of defect management in academic and enterprise environments.

The assessment method is a questionnaire for measuring student satisfaction with the didactic proposal. Before applying the questionnaire, a pilot with professors of the discipline was conducted. The questionnaire was applied to a non-probabilistic sample obtained by means of the snowball technique, identifying the individuals (students) who could participate in the validation of the didactic proposal. For the questionnaire application, thirty systems engineering students with the following characteristics were selected: (1) at least in their 3rd semester and (2) a cumulative average greater than 75%.

4.1. Case study description

The didactic proposal is based on an analogy between paper boat construction and software development. The validation was carried out in a two-hour session, through the following steps:

  1. After the instructions of the didactic proposal are explained, the participants are organized into teams with well-defined roles (leader, developer, and tester) (duration: 20 minutes).
  2. The teams build paper boats over four phases, simulating software development phases (incremental development). Each phase includes a feedback space about the defects identified on the boats, based on the principles of software verification (duration: 5 minutes each for phases 1 and 2; 15 minutes for phase 3; 20 minutes for phase 4; total: 45 minutes).
  3. The tester of each team takes quality metrics for the products (boats) by following a verification guide that includes acceptance criteria analogous to those of software products. In addition, the tester records some quality metrics in a defect registration form. This activity is required in each phase in order to track the evolution of the team in terms of performance and productivity (duration: 20 minutes).
  4. Discussion of team results (duration: 20 minutes).
  5. The students fill in the questionnaire assessing the didactic proposal (duration: 15 minutes).

The didactic proposal goals, in terms of the teaching and learning process, are: (1) recognizing the roles and responsibilities in a software development team, according to the TSP methodology; (2) identifying the business requirements of each phase and the importance of their verification; and (3) understanding the basic formulas for measuring defects and their utility for achieving software quality. The required materials for applying the didactic proposal are: paper sheets, pencils, the origami instruction form for boat construction, and the defect registration form.

The didactic proposal has a set of rules for orienting the activities of the teams, which are set up as software development companies that hire a testing firm for product quality verification. This is important because such rules promote competition among the teams, indicating that the client will select only one company as a product provider according to the quality of its products. The rules of the game are as follows:

a. The participants organize into teams of three members, each with a specific role (leader, developer, or tester). The leader coordinates the work and explains to the other members the activities to do. The developer builds the boats in each phase, according to the origami instruction form, and the tester is responsible for quality verification through defect management metrics.

b. The instructor defines and explains to team leaders the acceptance criteria of the products. Such criteria are related to the quality of the folds, the uniformity of the boat tips, wrinkles, lines, and paper breaks. The instructor also delivers the necessary material for boat construction (paper sheets, origami instruction forms, verification guides, and defect registration forms).

c. The participants build boats over four phases. In phases 1 and 2, five low-complexity boats should be constructed. Subsequently, in phases 3 and 4, five high-complexity boats should be constructed. The instructor emphasizes the quality of the final products (boats) rather than the quantity of units.

d. Each developer builds the defined quantity of boats, considering aspects such as the acceptance criteria and the production/time metrics previously established.

e. In each phase, at the end of the construction time, the developers deliver the boats to the testers to verify the acceptance criteria and identify defects. This verification is similar to the software testing process planned during the design phase.

f. The tester and the leader of each team fill in the defect registration form with the verification results identified in the previous step. For each acceptance criterion, the tester marks the column OK or Error (e.g., if the boat has a non-uniform tip, the tester marks the Error column for this acceptance criterion).

g. The tester and the leader calculate the following defect management metrics. For phases 1 and 3, formula (1) is used, and for phases 2 and 4, formula (2) is used (a computational sketch of these metrics appears after this list of rules).

In formula (1), DIP = defects injected in the phase, CT = construction time, TPE = total phase errors, and PT = phase time (construction time + verification time).

In formula (2), DRP = defects removed in the phase, TPrevPE = total previous phase errors, and TCurrPE = total current phase errors.

h. At the end of each phase, the tester presents feedback to the team showing the boat errors and the metrics related to the defects injected and removed in the phase.

i. In the last part of the execution of the didactic proposal, the instructor promotes a space for reflection about the concepts learned, the strategy/steps/method for obtaining the best results, and the lessons learned about teamwork.
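Because formulas (1) and (2) appear only as figures in the original article, their exact expressions are not reproduced in this text. The following minimal Python sketch therefore illustrates only one plausible reading of the variable definitions given in rule g; the function names and the specific arithmetic (normalizing TPE by PT for the injection metric, and taking the difference TPrevPE - TCurrPE for the removal metric) are assumptions for illustration, not the authors' formulas.

def defects_injected_rate(total_phase_errors: int, construction_time: float,
                          verification_time: float) -> float:
    # Assumed reading of formula (1): TPE normalized by the phase time PT,
    # where PT = construction time + verification time.
    phase_time = construction_time + verification_time  # PT
    return total_phase_errors / phase_time               # DIP (assumption)

def defects_removed(total_prev_phase_errors: int, total_curr_phase_errors: int) -> int:
    # Assumed reading of formula (2): difference between the errors of the
    # previous phase and those remaining in the current phase.
    return total_prev_phase_errors - total_curr_phase_errors  # DRP (assumption)

if __name__ == "__main__":
    # Hypothetical data taken from a defect registration form
    print(defects_injected_rate(total_phase_errors=6, construction_time=5, verification_time=3))  # 0.75
    print(defects_removed(total_prev_phase_errors=6, total_curr_phase_errors=2))                  # 4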

4.2. Case study results

In this section, the results of the questionnaire assessing student satisfaction with the didactic proposal are presented. The questionnaire was conducted in software engineering courses at the Universidad de Medellín during the 02-2013 and 01-2014 semesters.

The first student group participating in the questionnaire comprised 2nd- and 3rd-year software engineering students enrolled in courses focused on Software Project Management and Information Management. The second group was composed of students in their last year at the university, enrolled in courses focused on Software Process Improvement and Software Quality Assurance.

Descriptive statistics were applied to the data collected with the questionnaire. The first variable analyzed is the level of enjoyment, for which 53% of participants gave a high score of 4 ("very good") (see Table 4). The level of enjoyment is a feature related to the gamification goals; for this reason, identifying the level perceived by the students of the didactic proposal is relevant. According to the results presented in Table 4, the students find learning very enjoyable when several strategies complement conventional classroom scenarios.

Another variable analyzed is associated with the level of difficulty of the didactic proposal in relation to its game features (see Table 4). For this variable, the questionnaire results show a good level of difficulty, with the rules of the didactic proposal easily understood by the students. Thus, the students achieve a high degree of assimilation of the initial instructions, supporting a good development of the didactic proposal in the classroom.

The third variable analyzed is the closeness to reality of the didactic proposal. This variable is represented in the proposal by a phase distribution that simulates the software development process. According to the questionnaire results, 50% of the students consider the level of closeness to reality to be very good (4), compared to 30% of students who assign a good level. This result is very important for us as researchers because collecting quality metrics for defect management is an expensive and time-consuming activity. Additionally, the phase distribution of the didactic proposal is an opportunity to show students, in an entertaining manner, the costs associated with measuring defects and eliminating them early, which generated special interest in the students.

Finally, the questionnaire has three open-ended questions, whose answers were tabulated according to the similarity of the student responses. These questions analyze variables such as: concepts learned, strategies/steps/methods to obtain the best results in the activity, and suggestions for improving it. From the answers collected with the questionnaire, we find the following results:

  1. Concepts learned: 37% of students indicate that they learned software quality concepts, 27% state that their learning is oriented to identifying the importance of teamwork, and 17% consider the didactic proposal useful for recognizing the importance of quality metrics in the software development process.
  2. Strategies/steps/methods: 60% of students define building boats with maximum quality, rather than building a large quantity of boats, as the most important principle for the team.
  3. Suggestions: 30% of students would not make any change to the activity; however, 20% consider the time per phase to be limited. Moreover, 17% of participants recommend presenting the instructions more clearly and defining a training time for the teams before the activity. Finally, 10% consider it important to implement the didactic proposal as a videogame to decrease the use of paper sheets.

5. Conclusions and future work

Gamification constitutes an alternative for using game principles in teaching software engineering. In this way, we can exploit game features such as motivation, representativeness, and dynamism. Gamification as a technique has been successfully used in education and in social settings such as marketing or politics, with high levels of motivation and participant involvement.

The application of teaching and learning strategies requires an assessment approach for measuring the evolution of student learning and enhancing the educational activities conducted by teachers. In this paper, we present an assessment proposal based on Bloom's taxonomy that includes three components: i) competence-based assessment, comprising learning levels, evidence features, and rubric features; ii) didactic proposal assessment, based on questionnaires as a data collection method for measuring students' perception of the strategy; and iii) student satisfaction assessment, also based on a questionnaire, for obtaining the students' perception of their experience and satisfaction.

We validate the student satisfaction assessment component of our proposal by implementing a case study. The didactic proposal assessed in this case study is for teaching defect management in a software product. The results show interesting aspects of the experience, such as a high level of enjoyment, a good level of difficulty, and a notable closeness to reality of the didactic proposal. These variables evidence the relevance of applying gamification principles in didactic proposals for achieving significant learning.

As future work we identify: i) analyzing the viability of implementing the suggestions to the didactic proposal made by the students; and ii) applying the overall assessment proposal in an academic environment to validate its potential to measure the evolution of student learning and the effectiveness of the didactic proposals used in the classroom.

6. References

1. R. Casallas, J. Dávila and J. Quiroga, "Enseñanza de la ingeniería de software por procesos instrumentados", IEEE Software, vol. 16, no. 6, pp. 51-57, 1999.

2. P. Runeson, "Experiences from teaching PSP for freshmen", in 14th Conference on Software Engineering Education and Training, Charlotte, USA, 2001, pp. 98-107.

3. D. Groth and E. Robertson, "It's all about process: project-oriented teaching of software engineering", in 14th Conference on Software Engineering Education and Training, Charlotte, USA, 2001, pp. 7-17.

4. D. Dicheva, C. Dichev, G. Agre and G. Angelova, "Gamification in Education: A Systematic Mapping Study", Educational Technology & Society, vol. 18, no. 3, pp. 1-14, 2015.

5. A. Vivar et al., "Application of rubric in learning assessment: a proposal of application for engineering students", in 1st International Conference on Technological Ecosystem for Enhancing Multiculturality, Salamanca, Spain, 2013, pp. 441-446.

6. U.S. Department of Education, "A Test of Leadership: Charting the Future of U.S. Higher Education", U.S. Department of Education, Washington, D.C., USA, Report, Sep. 2006.

7. D. Golden, "Colleges, accreditors seek better ways to measure learning", Wall Street Journal, 2006.

8. L. Anderson, D. Krathwohl and B. Bloom, A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives, 1st ed. Boston, USA: Allyn & Bacon, 2001.

9. A. Dorling and F. McCaffery, "The gamification of SPICE", in 12th International Conference on Software Process Improvement and Capability Determination (SPICE), Palma de Mallorca, Spain, 2012, pp. 295-301.

10. W. Honig, "Teaching Successful 'Real-World' Software Engineering to the 'Net' Generation: Process and Quality Win!", in 21st Conference on Software Engineering Education and Training (CSEET), Charleston, USA, 2008, pp. 25-32.

11. G. Taran, "Using games in software engineering education to teach risk management", in 20th Conference on Software Engineering Education and Training (CSEET), Dublin, Ireland, 2007, pp. 211-220.

12. C. Zapata and G. Awad, "Requirements Game: Teaching Software Project Management", CLEI Electronic Journal, vol. 10, no. 1, 2007.

13. R. Raymer, Gamification: Using Game Mechanics to Enhance eLearning, 2011. [Online]. Available: http://elearnmag.acm.org/featured.cfm?aid=2031772. Accessed on: Feb. 13, 2015.

14. M. Wu, The Magic Potion of Game Dynamics, 2011. [Online]. Available: http://lithosphere.lithium.com/t5/Lithium-s-View/The-Magic-Potion-of-GameDynamics/ba-p/19260. Accessed on: Feb. 13, 2015.

15. N. Alart, "La evaluación competencial", Aula TIC, no. 30, pp. 1-3, 2010.

16. J. Biggs and C. Tang, Teaching for Quality Learning at University, 4th ed. Maidenhead, England: McGraw-Hill and Open University Press, 2011.

17. D. Metre, "A Learning Theory for Economics Instructional Development", The Journal of Economic Education, vol. 7, no. 2, pp. 95-103, 1976.

18. D. Potocki, I. Holmesland, M. Estrela and A. Veiga, "The Evaluation of Teaching and Learning", European Journal of Education, vol. 34, no. 3, pp. 299-312, 1999.

19. K. Pratt and R. Pallof, Making the Transition: Helping Teachers to Teach Online, 2000. [Online]. Available: http://eric.ed.gov/?id=ED452806. Accessed on: Feb. 13, 2015.

20. B. Weinberg, M. Hashimoto and B. Fleisher, "Evaluating teaching in higher education", The Journal of Economic Education, vol. 40, no. 3, pp. 227-261, 2009.

21. P. Perrenoud, Diez nuevas competencias para enseñar: invitación al viaje, 1st ed. Barcelona, Spain: Graó, 2007.

22. M. Rico, J. Coppens, P. Ferreira, H. Sánchez and J. Agudo, "Everything Matters: Development of Cross-Curricular Competences in Engineering Through Web 2.0 Social Objects", in Ubiquitous and Mobile Learning in the Digital Age, D. Sampson, P. Isaias, D. Ifenthaler and J. Spector (eds.). New York, USA: Springer, 2013, pp. 139-157.

23. F. Martínez, A. Chaparro and L. Lizasoain, "The socioeconomic index in the analysis of large-scale assessments: case study in Baja California (Mexico)", in 2nd International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 2014, pp. 461-467.

24. E. Crawley, The CDIO Syllabus: A Statement of Goals for Undergraduate Engineering Education, 2001. [Online]. Available: http://www.cdio.org/framework-benefits/cdio-syllabus. Accessed on: Feb. 13, 2015.

25. D. Rychen and L. Salganik, "Highlights from the OECD Project Definition and Selection Competencies: Theoretical and Conceptual Foundations (DeSeCo)", in Annual Meeting of the American Educational Research Association, Chicago, USA, 2003.

26. R. Marzano and J. Kendall, Designing and assessing educational objectives: Applying the new taxonomy, 1st ed. Portland, USA: Corwin Press, 2008.

27. M. Ilahi, L. Cheniti and R. Braham, "Formal competence-based assessment: on closing the gap between academia and industry", in 2nd International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 2014, pp. 581-587.

28. E. Barberá and E. Martín, Portfolio electrónico: aprender a evaluar el aprendizaje, 3rd ed. Editorial UOC, 2009.

29. H. Andrade and B. Boulay, "Role of Rubric-Referenced Self-Assessment in Learning to Write", The Journal of Educational Research, vol. 97, no. 1, pp. 21-34, 2003.

30. H. Andrade and Y. Du, "Student perspectives on rubric-referenced assessment", Practical Assessment, Research & Evaluation, vol. 10, no. 3, pp. 1-11, 2005.

31. R. Hernández, C. Fernández and P. Baptista, Metodología de la investigación. México, D. F., México: McGraw-Hill, 2010.