Author: Yeh-Liang Hsu
Note: This paper was published in Global Journal of Engineering Education (GJEE), Vol. 14, No. 3, pp. 219-224, 2012.

Implementing peer evaluation to improve students’ participation in oral presentations in a large mechanical design class

Abstract

Arranging oral presentations has always been very difficult and ineffective in our large project-based engineering design class of about 110 students each year. This study explores a scheme of implementing peer evaluation to improve students’ participation and learning outcomes in the oral presentation sessions. Basically, students were asked to grade and comment on oral presentations by other students in a pre-defined manner. The effects of oral presentations with and without the intervention of peer evaluation were compared. Our data and questionnaire results showed that, with careful design of the format, peer evaluation indeed improved students’ participation in oral presentations, naturally leading to a more serious and positive learning attitude and eventually a better learning outcome.

Keywords: project-based learning, guided design, peer evaluation, oral presentation

1.       Introduction

Engineering design is a practical problem-solving profession and a critical element in engineering education. In addition to gaining domain knowledge and the ability to use engineering tools, learning to deal with real engineering problems is an important process for engineering students. In engineering design courses, design projects are commonly adopted as a means for students to experience design, and “project-based learning” [1] has become the mainstream approach. Students can learn more from working on design projects than from sitting in lectures, and therefore pedagogical approaches such as project-based learning and problem-based learning have been adopted in many engineering design courses. Project-based learning also provides opportunities for interdisciplinary learning and has the potential to enhance student participation, motivation, and learning effectiveness [2, 3].

The author has been teaching a junior-level mechanical design course in a large class setting (about 110 students each year) for the past 18 years. Though the class is large, we still try to incorporate project-based learning in this course. Students are grouped into about 30 teams to work on their design projects. For example, the design project for the first semester of the 2008 academic year asked the students to design a ping-pong ball shooting robot to participate in a robot tic-tac-toe competition at the end of the semester. One problem the author has to deal with in this large class setting is that the professor and TAs do not have enough time to coach the 30 design teams individually. Therefore, an adapted “guided design procedure” is built into this mechanical design course [4]. Guided design is a structured way to lead students to advance their design projects step by step through a specific problem-solving or design procedure [5].

Oral presentations are good training for students to communicate their design concepts, and can also be used to assess students’ performance in design projects. Students can also exchange design information, experiences, and ideas with one another during the presentation sessions. In our large mechanical design class, we also use oral presentations for design project management. In our adapted guided design procedure, the term project is divided into 4 step-by-step small projects. Each small project is part of a step-by-step system design procedure: to develop the specifications, to generate various design concepts for a subsystem, or to perform certain analyses or evaluations of the design. Oral presentations of the 4 small projects serve as design reviews of the term design project for each design team. It is crucial that the students receive evaluations and useful comments from the professor, TAs, and their fellow students following the oral presentation of each small project, so they can stay on the right track and keep up with the schedule of the term design project.

However, arranging oral presentations can be very difficult and ineffective in a large engineering design class such as ours. The practical problems we encountered in past years are as follows:

•  We had to invest a large portion of class time in the oral presentations of the 30 teams, but students’ participation in the oral presentation sessions was low. Most students focused on their own presentations and did not pay attention to other teams’ presentations. The learning effect from listening to and interacting with other students in the presentation sessions was much lower than expected.

•  Instead of speaking to the general audience, students often prepared their oral presentations specifically for the professor, who did the grading. Very often, the professor was the only intended audience when students made their presentations. Thus, the learning effect from preparing and making the presentations was also limited.

•  Students complained that grading by the professor alone was subjective and unfair.

In a problem-solving process, there may be multiple potential solutions or alternatives for a design. Emerging pedagogical approaches such as cooperative learning and peer assessment/evaluation are well suited to engineering design courses [3]. Peer evaluation has been documented as an important element in project teamwork [6], and has been widely used in various project-based engineering courses, in civil engineering [7-9], industrial design [10], nanotechnology engineering [11], and science education [12], to name a few. It was reported that students welcomed structured peer-evaluation feedback on a project, and the findings of Williams et al. [13] also suggested that professors should structure peer feedback during a project with peer evaluation at the end of the project. For the purpose of managing large classes, O'Moore described “Peer Assessment Learning Sessions (PALS)” to address the challenge of providing frequent, efficient, and timely assessment for large classes of 100-200+ students while simultaneously enabling and providing high-quality formative feedback [9].

In addition to project-based engineering courses, the implementation of peer evaluation/assessment has received much attention in teacher education for developing the assessment and reflection skills of student teachers [14, 15]. Peer evaluation/assessment is also commonly used in health professional education, in which promoting professional behaviors and interpersonal communication skills is a crucial element [16, 17]. The development of assessment skills, reflection skills, professional behavior, and interpersonal communication skills is certainly also important in engineering education.

This study explores a scheme of implementing peer evaluation/assessment to improve students’ participation and learning outcomes in the oral presentation sessions of our large mechanical design class. Basically, students were asked to grade and comment on oral presentations by other students in a pre-defined manner. Section 2 of this paper describes our scheme for implementing peer evaluation in oral presentations and the concerns we had about asking students to perform the grading. To address those concerns, an experiment was designed to explore the effects and proper formats of peer evaluation. The students’ grading and the questionnaire results are analyzed in Section 3. Finally, Section 4 concludes with our findings.

2.       Implementing peer evaluation in oral presentations

As described in the previous section, the term design project (such as the design of a ping-pong ball shooting robot) is divided into 4 step-by-step small projects in the adapted guided design procedure for our large engineering design class. Oral presentations of the 4 small projects are used as design reviews of the term design project for each design team of 3 to 4 students. Each student is required to make at least one oral presentation during the semester. Details about the format of the oral presentations are as follows:

•  The oral presentation given by each team is limited to 5 minutes. Besides saving class time across the 30 team presentations, this requirement is also intended to train students to present their work more concisely and with better structure.

•  Students are asked to grade the oral presentations by other teams. The students’ grading has a 50% impact on the final scores of the oral presentations, and the other 50% is given by the professor. In addition, the professor gives written comments to each team on their design work and oral presentation. The students can also provide their written opinions and responses to the presentations.

•  The presentation scores range from 6 to 10 points, and the distribution of scores across all the teams in a session is pre-determined on the scoring sheet. Taking a 15-team session as an example, a student judge is allowed to give one 6, three 7s, five 8s, three 9s, and two 10s to the 14 other teams in the session.
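
As a concrete illustration of these rules, the sketch below (in Python, using hypothetical scores; the course itself used paper scoring sheets) validates a judge’s sheet against the predetermined distribution for a 15-team session and combines a team’s class average with the professor’s score at the 50/50 weighting described above.

```python
from collections import Counter

# Predetermined distribution for a 15-team session: each judge grades the
# 14 other teams with exactly one 6, three 7s, five 8s, three 9s, two 10s.
REQUIRED_DISTRIBUTION = {6: 1, 7: 3, 8: 5, 9: 3, 10: 2}

def sheet_is_valid(scores):
    """Return True if a judge's 14 scores follow the required distribution."""
    return Counter(scores) == REQUIRED_DISTRIBUTION

def final_score(professor_score, student_scores):
    """Combine grades: 50% professor, 50% class average, per the course rules."""
    return 0.5 * professor_score + 0.5 * sum(student_scores) / len(student_scores)

# Hypothetical scoring sheet for the 14 other teams in a session.
sheet = [8, 7, 9, 8, 10, 6, 8, 7, 9, 8, 7, 10, 9, 8]
assert sheet_is_valid(sheet)

# Hypothetical scores one team received from five judges and the professor.
print(final_score(9, [8, 8, 9, 7, 10]))  # -> 8.7
```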

About one third of a student’s final grade for the semester depends on their performance in oral presentations. The student grading process described above naturally leads to concerns of unfairness, real or perceived, in grading. Students may not have adequate experience or reference points to assess the quality of the projects. This is why a predetermined distribution of scores was imposed on the students’ grading. However, the professor’s grading did not have to strictly follow the predetermined distribution, to account for a number of equally good or equally bad presentations.

At the beginning of the semester, the purpose and rules of the peer evaluation were clearly explained to the students. Student judges were not allowed to grade the presentations of their own teams, and they were asked to sign their scoring sheets to indicate that they took responsibility for the professional judgment of their grading. Students were also guaranteed that their scoring sheets would be kept strictly confidential to the professor and TAs and would not be released to any other student.

When implementing this new scheme in the class, we did have several questions in mind:

(1)  Is 5 minutes enough for an oral presentation?

(2)  Do the students have sufficient ability to make professional judgments?

(3)  Does this scheme enhance student participation and the learning outcome?

We conducted two trials in consecutive years, in both of which students were asked to do a similar ping-pong ball shooting robot project, to evaluate the effectiveness of the new scheme for implementing peer evaluation in oral presentations. In the first year (Trial I), all students were divided into two sessions (Trial I-A and Trial I-B) for design projects and oral presentations. In Trial I-A, the grades of the oral presentations were given by the professor only. Peer evaluation was implemented in Trial I-B, in which every student had to grade all 4 presentations made by the other teams. To avoid confounding variables associated with participant selection, students were assigned to Trial I-A or Trial I-B purely by their student ID numbers.

The second trial (Trial II) was conducted in the following year. The number of student judges in an oral presentation was reduced to one quarter: only one student from each team was asked to give grades. The students in each team took turns representing the team as the student judge, and every student had to serve as the student judge at least once over the 4 oral presentations in the semester. With fewer student judges, we were able to implement a short Q&A, in which each team was questioned by one student judge following its 5-minute presentation.

The grades given by the professor and by the students in Trial I-B and Trial II were compared to examine how closely the students’ grading conformed to the professor’s. The coefficient of variation of the students’ grading was also calculated and compared to evaluate the consistency of the students’ grading.

At the end of each semester, following the 4 oral presentations, all students were asked to fill out a questionnaire that had not been announced in advance. The questionnaire was anonymous, and the students therefore understood that their answers and comments would not affect their final grades for this course. The questionnaire was intended to evaluate the students’ level of participation, attitude, “perceived fairness” of the grading process, and the learning outcome of the oral presentations and of the mechanical design course in general, before and after peer evaluation/assessment of oral presentations was implemented.

3.       Analysis of Experimental Data

In this section, the data collected from Trial I and Trial II are analyzed to answer the questions raised in the previous section.

(1)  Is 5 minutes enough for an oral presentation?

To answer this question, we examined the number of slides used in the presentations and the actual length of the presentations. In Trials I and II, the number of slides in the student presentations ranged from 7 to 22, with an average of 11.0. Though this average seems large for a 5-minute presentation, most teams were able to complete their oral presentations on schedule; the average length of the presentations was 4 minutes and 40 seconds.

(2)  Do the students have sufficient ability to make professional judgments?

Here we assumed the professor’s judgment to be “professional” and compared the scores given by the professor with those given by the students. Table 1 shows the results from the four oral presentations (P1 to P4) in Trial I-B and Trial II. The correlation coefficient is a measure of linear association between two variables x and y; positive values indicate that as x increases, y also increases, and a value of exactly +1 indicates a perfect positive fit. As shown in Table 1, the correlation coefficient between the scores given by the professor (x) and those given by the students (y) in the 4 oral presentations in Trial I-B ranged from 0.50 to 0.68. An even stronger correlation, with coefficients ranging from 0.60 to 0.79, was obtained in Trial II, when the number of student judges was reduced to one quarter. The average difference between the professor’s scores and the students’ average scores was 0.81 in Trial I-B and 0.72 in Trial II.
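
For reference, the two statistics reported in Table 1 can be computed as in the sketch below (Python; the score vectors are made up for illustration and are not the actual course data).

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mean_abs_difference(x, y):
    """Average absolute gap between two sets of scores for the same teams."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

# Illustrative data only: the professor's scores (x) and the students'
# average scores (y) for six hypothetical teams.
professor = [6, 7, 8, 8, 9, 10]
students = [7.1, 7.4, 7.9, 8.3, 8.6, 9.2]
print(round(pearson_r(professor, students), 2))            # strength of agreement
print(round(mean_abs_difference(professor, students), 2))  # average gap
```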

These results indicate a large positive correlation between the professor’s and the students’ judgments, even though the professor’s grading did not have to strictly follow the predetermined distribution of scores. This finding is consistent with the results of similar studies comparing peer and teacher scores [18, 19]. Falchikov and Goldfinch conducted a meta-analysis of 48 quantitative peer assessment studies comparing peer and teacher marks; peer assessments were found to resemble teacher assessments more closely when global judgments based on well-understood criteria were used than when marking involved assessing several separate dimensions [19].

Further examination of the data reveals that most of the larger differences occurred when the professor gave an extreme score (6 or 10) to a team. The students’ grading may have a balancing effect on the possible subjectivity of the professor’s judgment.

Table 1. The correlation and average difference between the professor’s and the students’ scores

Oral presentation                       P1      P2      P3      P4      Average

Trial I-B   Correlation coefficient     0.68    0.50    0.56    0.55    0.57
            Difference in scores        0.78    0.73    0.85    0.89    0.81
Trial II    Correlation coefficient     0.79    0.76    0.60    0.70    0.71
            Difference in scores        0.92    0.70    0.64    0.64    0.72

To further explore the consistency of the students’ grading, i.e., whether the better teams received higher scores from all student judges and vice versa, Table 2 compares the average coefficients of variation of the scores given by the student judges to all design teams in the 4 oral presentations. The coefficient of variation, defined as the ratio of the standard deviation to the mean, is a normalized measure of the dispersion of a distribution; it is useful for comparing the degree of variation from one data series to another even when the means differ. A computer program was written to generate scores randomly while following the grading rules introduced in the previous section. The average coefficient of variation of the scores simulated by the computer program, 20.6%, represents the reference value when there is absolutely no consistency in the scores given to each team by the student judges. From Table 2, the students’ grading in Trial II appears to be more consistent than in Trial I-B (with a lower average coefficient of variation). In both trials, the coefficients of variation are far below that of the randomly simulated scores.

Table 2. Comparison of average coefficients of variation

Score samples   Average coefficient of variation (%)

Random          20.6
Trial I-B       13.1
Trial II        11.0
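
The random-grading baseline can be approximated with a short Monte Carlo sketch like the one below (Python). The session size, judge count, and other settings of the original program were not reported in detail, so the parameters here are illustrative assumptions, and the resulting figure need not match the 20.6% reported above.

```python
import random
import statistics

# A judge's sheet under the predetermined distribution (14 teams graded).
SHEET = [6] + [7] * 3 + [8] * 5 + [9] * 3 + [10] * 2

def random_grading_cv(n_judges=30, n_runs=1000, seed=0):
    """Average per-team coefficient of variation (%) when every judge
    hands out the predetermined sheet in a random order, i.e., with
    absolutely no consistency between judges."""
    rng = random.Random(seed)
    cvs = []
    for _ in range(n_runs):
        sheets = [rng.sample(SHEET, len(SHEET)) for _ in range(n_judges)]
        for team_scores in zip(*sheets):  # scores one team got from all judges
            cvs.append(statistics.pstdev(team_scores) / statistics.mean(team_scores))
    return 100 * statistics.mean(cvs)

print(round(random_grading_cv(), 1))
```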

(3)  Does this scheme enhance student participation and the learning outcome of the presentation sessions?

Table 3 shows the questions and results of the questionnaire given to the students at the end of the semesters in Trial I and Trial II. The students answered by indicating their level of agreement with each statement, where 2 = strongly agree, 1 = agree, 0 = neither agree nor disagree, -1 = disagree, and -2 = strongly disagree.
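
Each value in Table 3 is presumably the mean of these coded responses over all respondents in a trial; a minimal sketch of that computation, with made-up answers, is shown below.

```python
# Coding of the agreement levels, as defined above.
CODING = {"strongly agree": 2, "agree": 1, "neither agree nor disagree": 0,
          "disagree": -1, "strongly disagree": -2}

# Hypothetical responses from five students to one statement.
responses = ["agree", "strongly agree", "neither agree nor disagree",
             "agree", "disagree"]
print(sum(CODING[r] for r in responses) / len(responses))  # -> 0.6
```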

The questions in the questionnaire were organized into 4 groups. Group 1 (Questions 1-4) asked students for a self-evaluation of class participation. The results reveal that the students in Trial I-A, in which there was no peer evaluation, reported the lowest class participation. Group 2 (Questions 5-12) asked students about their attitude towards the oral presentations. With the peer evaluation scheme, students in Trial I-B and Trial II had a more serious and positive attitude than students in Trial I-A: they considered the responses from other students when preparing their presentations (Question 6), and they seemed to pay more attention to the presentations by other teams (Questions 9 and 10). Note that students in Trial II cared about their grades and comments much more than students in Trial I (Questions 11 and 12).

Group 3 (Questions 13-17) asked students about their feelings towards the grading. The “perceived fairness” of the rules of grading improved slightly in Trial II (Questions 13 and 14), though students in Trial I-B agreed more strongly that the grading was objective (Question 17). Group 4 (Questions 18-24) asked students whether they had learned from making oral presentations. Again, with the peer evaluation scheme, students in Trial I-B and Trial II were more positive about the learning outcome of making oral presentations. Note that in Trial II, students expressed significantly stronger agreement that their ability to express, communicate, and cooperate with other people improved because of the oral presentations.

Table 3. The results of the questionnaire

No.  Question                                                                                        Trial I-A  Trial I-B  Trial II
1    How much time did you usually spend on each design project? [1]                                    0.00      -0.12      0.53
2    How much did you participate in the design projects? [2]                                           0.91       1.08      1.11
3    Your attendance in the class? [3]                                                                  0.97       1.34      1.29
4    How many times did you attend the oral presentations?
       4 times                                                                                           85%        90%       81%
       3 times                                                                                            8%         4%       10%
       2 times                                                                                            0%         0%        1%
       1 time                                                                                             7%         6%        8%
5    I tried to satisfy the professor’s requirements when preparing oral presentations.                 0.88       1.34      1.16
6    I considered the responses from other students when preparing oral presentations.                  0.50       1.00      1.15
7    I was interested in other teams’ oral presentations, and I listened to them carefully.             0.90       0.95      1.08
8    The other classmates were interested in the oral presentation by my team.                          0.31       0.60      0.81
9    I learned to do better design from listening to the oral presentations of other teams.             0.90       1.44      1.45
10   I can improve my oral presentation skills by listening to other presentations.                     0.78       1.10      1.32
11   I cared about the grades of my team after the project presentation.                                0.77       1.06      1.44
12   I cared about the comments given by the professor following the project presentation.              1.02       0.88      1.43
13   I feel it is fair to have a pre-determined distribution of scores for the presentations.           0.40       0.15      0.62
14   I feel it is fair to allow students to grade others.                                               0.56       0.45      0.92
15   I feel the grading of the oral presentations emphasized the design content.                        0.46       0.65      0.69
16   I feel the grading of the oral presentations emphasized oral presentation skills.                  0.46       0.65      0.90
17   I feel the grading of the oral presentations is objective.                                         0.16       0.94      0.55
18   The comments given by the professor helped me understand the pros and cons of my team’s work.      0.77       0.91      1.21
19   The grades of the oral presentations of my team met my expectations.                               0.32       0.53      0.69
20   I am more serious about the design project because of the oral presentations.                      0.62       1.26      1.23
21   I understand the professional knowledge better because of the oral presentations.                  0.72       1.44      1.25
22   The oral presentations helped me enhance my ability to express and communicate.                    0.78       0.94      1.25
23   The oral presentations helped me improve my ability to cooperate with other people.                0.74       0.90      1.25
24   I would rather make oral presentations than take exams.                                            0.77       1.07      0.92

[1]  2 = 12 hours or above; 1 = 9-12 hours; 0 = 6-9 hours; -1 = 3-6 hours; -2 = 3 hours or less.
[2]  2 = 100%; 1 = 75%-100%; 0 = 50%-75%; -1 = 25%-50%; -2 = 25% or less.
[3]  2 = 100%; 1 = 75%-100%; 0 = 50%-75%; -1 = 25%-50%; -2 = 25% or less.

4.       Discussion and Conclusions

Oral presentation is an essential element of engineering design courses. However, arranging oral presentations has been very difficult and ineffective in our large project-based mechanical design class of about 110 students each year. This study explored a scheme of implementing peer evaluation to improve students’ participation and learning outcomes in the oral presentation sessions. Basically, students were asked to grade and comment on oral presentations by other students in a pre-defined manner. We conducted two trials in consecutive years to evaluate the effectiveness of the scheme under different formats. Our data and questionnaire results showed that, with careful design of the format, peer evaluation indeed improved students’ participation in oral presentations, naturally leading to a more serious and positive learning attitude and eventually a better learning outcome.

In this research, we found that the format of the peer evaluation also has a significant influence. To give the students some reference points, a predetermined distribution of scores was imposed on the students’ grading. At the same time, this may increase the difficulty of grading, because all presentations must be heard before scores can be given, and students are forced to make strict judgments by picking out the best and worst presentations. The professor’s grading did not have to strictly follow the predetermined distribution, to account for a number of equally good or equally bad presentations.

The proper number of student judges was also a concern. In Trial II described in this paper, only one quarter of the students were assigned to be student judges in the oral presentations. These students seemed to have a greater sense of responsibility than in Trial I-B, in which all students gave scores. As a result, the student judges exercised their professional judgment more carefully, and the scores given by the students had a higher correlation with those given by the professor and lower coefficients of variation.

Another side benefit is that, with fewer student judges, we were able to arrange a short Q&A session following each oral presentation. In the questionnaire, students in Trial II (with the Q&A session) expressed significantly stronger agreement that making oral presentations helped them improve their ability to express, communicate, and cooperate with other people.

The peer evaluation process described in this paper naturally leads to concerns about the subtle issue of fairness in grading. Our questionnaire results showed a slight improvement in “perceived fairness” after peer evaluation was implemented. Further examination of the scoring data reveals that the students’ grading may balance the possible subjectivity of the professor’s judgment.

The peer evaluation scheme presented in this paper may not be innovative, as many engineering design classes may already have implemented a peer review process in oral presentations. However, the data obtained in this research provide solid evidence that implementing peer evaluation indeed improves students’ participation in oral presentations. Though the motivation for this research was our large mechanical design class, the author believes that the experience and findings can also be useful for other engineering design courses planning to implement peer evaluation.

References

[1]     P. C. Blumenfeld, E. Soloway, R. W. Marx, J. S. Krajcik, M. Guzdial, A. Palincsar. Motivating Project-Based Learning: Sustaining the Doing, Supporting the Learning. Educational Psychologist, 26/3 (1991), 369-398.

[2]     C. L. Dym, A. M. Agogino, O. Eris, D. D. Frey, L. J. Leifer, Engineering design thinking, teaching, and learning. Journal of Engineering Education, 94/1 (2005), 103-120.

[3]     C. J. Atman, R. S. Adams, M. E. Cardella, J. Turns, S. Mosborg, J. Saleem. Engineering design processes: A comparison of student and expert practitioners. Journal of Engineering Education, 96/4 (2007), 359-379.

[4]     Y. L. Hsu, C. Y. Yo. The problem-solving approach for the fundamental hands-on practice courses in mechanical engineering education. Journal of the Chinese Society of Mechanical Engineers, 24/5 (2003), 517-524.

[5]     C. E. Wales, R. A. Stager, T. R. Long. Guided Engineering Design, West Publishing Company, St. Paul, MN, 1974.

[6]     S. D. Carr, E. D. Herman, S. Z. Keldsen, J. G. Miller, P. A. Wakefield, The Team Learning Assistant Workbook, McGraw Hill, N.Y., 2005.

[7]     Y. Rafiq. Peer assessment of group projects in civil engineering. Assessment & Evaluation in Higher Education, 21/1 (1996), 69-81.

[8]     N. van Hattum-Janssen. Explicitness of criteria in peer assessment processes for first-year engineering students. European Journal of Engineering Education, 31/6 (2006), 683-691.

[9]     L. M. O'Moore. Peer Assessment Learning Sessions (PALS): an innovative feedback technique for large engineering classes. European Journal of Engineering Education, 31/1 (2007), 43-55.

[10]  D. Boydell. The use of peer group review in the assessment of project work in higher education. Mentoring & Tutoring: Partnership in Learning, 12/2 (1994), 45-52.

[11]  M. C. Hersam, M. Luna, G. Light. Implementation of interdisciplinary group learning and peer assessment in a nanotechnology engineering course. Journal of Engineering Education, 93/1 (2004), 49-57.

[12]  W. Y. Poon, C. McNaught, P. Lam, H. S. Kwan. Improving assessment methods in university science education with negotiated self- and peer-assessment. Assessment in Education: Principles, Policy & Practice, 16/3 (2009), 331-346.

[13]  B. C. Williams, B. B. He, D. F. Elger, B. E. Schumacher. Peer evaluation as a motivator for improved team performance in Bio/Ag engineering design classes. International Journal of Engineering Education, 23/4 (2007), 698-704.

[14]  D. M. A. Sluijsmans, S. Brand-Gruwel, J. J. G. van Merriënboer. Peer assessment training in teacher education: effects on performance and perceptions. Assessment & Evaluation in Higher Education, 27/5 (2002), 443-454.

[15]  D. M. A. Sluijsmans, S. Brand-Gruwel, J. J. G. van Merriënboer, T. J. Bastiaens. The training of peer assessment skills to promote the development of reflection skills in teacher education. Studies in Educational Evaluation, 29/1 (2003), 23-42.

[16]  J. Schönrock-Adema, M. Heijne-Penninga, M. A. J. Van Duijn, J. Geertsma, J. Cohen-Schotanus. Assessment of professional behaviour in undergraduate medical education: peer assessment enhances performance. Medical Education, 41 (2007), 836–842.

[17]  R. Ladyshewsky, E. Gotjamanos. Communication skill development in health professional education: the use of standardised patients in combination with a peer assessment strategy. Journal of Allied Health, 26/4 (1997), 177-186.

[18]  J. H. Kelmar. Peer assessment in graduate management education. International Journal of Educational Management, 7/2 (1993), 4-7.

[19]  N. Falchikov, J. Goldfinch. Student peer assessment in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research, 70/3 (2000), 287-322.