Authors: Yeh-Liang Hsu, Che-Chang Yang (2010-04-30); recommended by Yeh-Liang Hsu (2010-08-24).
Note: This paper was presented at the ASME 2010 International Design Engineering Technical Conferences (IDETC), Montreal, Quebec, Canada, August 2010.

Improving student participation in oral presentation of a project-based engineering design course in a large-class setting

Abstract

Arranging oral presentations has always been very difficult and ineffective in our large project-based engineering design class of about 110 students each year. This study explores a scheme of implementing peer evaluation to improve students’ participation and learning outcome in the oral presentation sessions. Basically, students were asked to grade and comment on oral presentations by other students in a pre-defined manner. The effects of oral presentations with and without the intervention of peer evaluation were compared. Our data and questionnaire results showed that, with careful design of the format, peer evaluation indeed improved students’ participation in oral presentations, naturally leading to a more serious and positive learning attitude and, eventually, a better learning outcome.

Keywords: project-based learning, guided design, peer evaluation, oral presentation

1.         Introduction

Engineering design is a practical problem-solving profession and a critical element in engineering education. Beyond gaining domain knowledge and the ability to use engineering tools, the training to deal with real engineering problems is an important learning process for engineering students [Atman et al., 2007]. In engineering design courses, design projects are commonly adopted as a means for students to “experience design”. Students can learn more from working on design projects than from sitting in lectures, and pedagogical approaches such as project-based learning or problem-based learning have therefore been developed in many engineering design courses [Bygstad et al., 2009]. Project-based learning also provides opportunities for interdisciplinary learning, and has the potential to enhance student participation, motivation, and learning effectiveness [Jou et al., 2008].

The author has been teaching a junior-level mechanical design course in a large-class setting (about 110 students each year) for the past 18 years. Though the class is large, we still try to incorporate project-based learning in this course. Students are grouped into about 30 teams to work on their design projects. For example, the design project for the first semester of the 2008 academic year asked the students to design a ping-pong ball shooting robot to participate in a robot tic-tac-toe competition at the end of the semester. One problem the authors have to deal with in this large-class setting is that the professor and TAs do not have enough time to coach the 30 design teams individually. Therefore, an adapted “guided design procedure” is built into this mechanical design course [Hsu et al., 2003]. Guided design is a structured way of having students work through case studies [Wales et al., 1974]. The adapted guided design procedure in our mechanical design course attempts to lead the students to advance their design projects step by step through a specific problem-solving or design procedure.

Oral presentation is good training for students to communicate their design concepts, and can also be used to assess students’ performance in design projects [Khandaker et al., 2008]. Students can also exchange design information, experiences, and ideas with one another during the presentation sessions. In our large engineering design class, we also use oral presentations for design project management. The term project is divided into 4 step-by-step small projects in our adapted guided design procedure. Oral presentations of the 4 small projects are used as design reviews of the term design project for each design team. It is crucial that the students receive evaluations and useful comments from the professor, TAs, and their fellow students following the oral presentation of each small project, so they can stay on the right track and keep up with the schedule of the term design project.

However, arranging oral presentations can be very difficult and ineffective in a large engineering design class such as ours. The practical problems encountered in past years are as follows:

•   We had to invest a large portion of class time in the oral presentations of 30 teams, but students’ participation in the oral presentation sessions was low. Most students focused on their own presentations and did not pay attention to other teams’ presentations. The learning effect from listening to and interacting with other students in the presentation sessions was much lower than expected.

•   Instead of speaking to the general audience, students often prepared their oral presentations specifically for the professor who did the grading. Very often, the professor was the only intended audience for the students when making presentations. Thus, the learning effect from preparing and making the presentations was also limited.

•   Students complained that grading by the professor alone was subjective and unfair.

In a problem-solving process, there may be multiple potential solutions or alternatives for a design. Emerging pedagogical approaches such as cooperative learning and peer assessment/evaluation are well suited to engineering design courses [Atman et al., 2007]. Peer evaluation has been documented as an important element in project teamwork [Carr et al., 2005]. It has been reported that students welcome structured peer feedback on a project, and the findings of Williams et al. [2007] also suggest that professors should structure peer feedback during a project, with peer evaluation at the end of the project.

To improve students’ participation and learning outcome in the oral presentation sessions, this study explores a scheme for implementing peer evaluation in oral presentations in our large engineering design class. Basically, students were asked to grade and comment on oral presentations by other students in a pre-defined manner. Section 2 of this paper describes our scheme for implementing peer evaluation in oral presentations, and the concerns we had about asking students to perform the grading. To address those concerns, an experiment was designed to explore the effects and proper formats of peer evaluation. The students’ grading and the questionnaire results are analyzed in Section 3. Finally, Section 4 concludes with our findings.

2.         Implementing peer evaluation in oral presentations

As described in the previous section, the term design project (such as the design of a ping-pong ball shooting robot) is divided into 4 step-by-step small projects in the adapted guided design procedure for our large engineering design class. Oral presentations of the 4 small projects are used as design reviews of the term design project for each design team of 3 to 4 students. Each student is required to make at least one oral presentation during the semester. Details about the format of the oral presentations are as follows:

•   The oral presentation given by each team is limited to 5 minutes. Besides saving class time across 30 team presentations, this requirement is also intended to train the students to present their work more concisely and with better structure.

•   Students are asked to grade the oral presentations given by other teams. The students’ grading accounts for 50% of the final score of each oral presentation; the other 50% is given by the professor. In addition, the professor gives written comments to each team on their design work and oral presentation. The students can also provide written opinions and responses to the presentations.

•   The presentation scores range from 6 to 10 points, and the distribution of the scores over all the teams in a session is pre-determined in the scoring sheet. Taking a 15-team session as an example, the students are allowed to give one score of 6, three scores of 7, five scores of 8, four scores of 9, and two scores of 10 to the 15 teams (a sketch of how such a scoring sheet can be checked appears below).
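
As a concrete illustration, the following is a minimal sketch, not the authors’ actual tool (which the paper does not describe), of how a completed scoring sheet could be validated against the predetermined distribution. The quota encoding is an assumption based on the 15-team example above; the paper gives no rule for other session sizes.

    from collections import Counter

    # Hypothetical quota from the 15-team example above:
    # one 6, three 7s, five 8s, four 9s, and two 10s.
    QUOTA_15_TEAMS = {6: 1, 7: 3, 8: 5, 9: 4, 10: 2}

    def sheet_is_valid(scores, quota=QUOTA_15_TEAMS):
        """Return True if a completed scoring sheet uses exactly the
        predetermined multiset of scores, one score per team."""
        return Counter(scores) == Counter(quota)

    # Example: a sheet that respects the quota for a 15-team session.
    sheet = [8, 9, 7, 8, 10, 8, 9, 6, 8, 9, 7, 10, 8, 9, 7]
    assert sheet_is_valid(sheet)
    assert not sheet_is_valid([10] * 15)  # giving everyone a 10 violates the quota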

About one-third of each student’s final grade for the semester depends on their performance in the oral presentations. The student grading process described above naturally leads to some concerns of unfairness, real or perceived, in grading. Students may not have adequate experience or reference points with which to assess the quality of the projects. This is why a predetermined distribution of scores was imposed on students’ grading. However, the professor’s grading did not have to strictly follow the predetermined distribution of scores, to account for a number of equally good or equally bad presentations.

When implementing this new scheme in the class, we did have several questions in mind:

(1)       Is 5 minutes enough for an oral presentation?

(2)       Do the students have sufficient ability to make professional judgments?

(3)       Does this scheme enhance student participation and the learning outcome?

We conducted two trials in consecutive years, in both of which students were asked to do a similar ping-pong ball shooting robot project, to evaluate the effectiveness of the new scheme for implementing peer evaluation in oral presentations. In the first year (Trial I), the student teams were divided into two sessions (Trial I-A and Trial I-B) for oral presentations. In Trial I-A, the grades of the oral presentations were given by the professor only. Peer evaluation was implemented in Trial I-B, in which all students had to grade all 4 presentations made by other teams.

The second trial (Trial II) was conducted in the following year. The number of student judges in an oral presentation was reduced to one-quarter of the class: only one student from each team was asked to give grades. The students in each team took turns serving as the student judge, and every student served at least once throughout the 4 oral presentations in the semester. With fewer student judges, we were able to implement a short Q&A, in which each team was questioned by one student judge following its 5-minute presentation.

In this experiment, the grades given by the professor and by the students in Trial I-B and Trial II were compared to examine how closely the students’ grading conformed to the professor’s grading. The coefficient of variation of the students’ grading was also calculated and compared to evaluate the consistency of the students’ grading.
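
For reference, the quantities used in this comparison can be computed as in the minimal sketch below. Treating the “difference” as the mean absolute difference between the professor’s score and the class-average peer score for each team, and using the population standard deviation in the coefficient of variation, are our assumptions; the paper does not specify these conventions.

    import numpy as np

    def pearson_r(prof, peer_mean):
        """Pearson correlation between the professor's scores and the
        average peer scores, one entry per presenting team."""
        return np.corrcoef(prof, peer_mean)[0, 1]

    def mean_abs_diff(prof, peer_mean):
        """Average gap between the professor's score and the average
        peer score (assumed here to be an absolute difference)."""
        prof, peer_mean = np.asarray(prof), np.asarray(peer_mean)
        return np.abs(prof - peer_mean).mean()

    def cv_percent(scores):
        """Coefficient of variation (standard deviation / average) of
        the peer scores one team received, as a percentage."""
        scores = np.asarray(scores, dtype=float)
        return 100.0 * scores.std() / scores.mean()

    # Toy usage on made-up scores for five teams:
    prof = [8, 9, 7, 10, 8]
    peer = [8.2, 8.6, 7.4, 9.1, 8.3]
    print(pearson_r(prof, peer), mean_abs_diff(prof, peer))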

At the end of each semester, following the 4 oral presentations, all students were asked to fill out a questionnaire that had not been announced in advance. The questionnaire was anonymous, so the students understood that their answers and comments would not affect their final grades for the course. The questionnaire was intended to evaluate the level of students’ participation, their attitude, the “perceived fairness” of the grading process, and the learning outcome of the oral presentations and of the mechanical design course in general, before and after peer evaluation of oral presentations was implemented.

3.         Analysis of Experiment Data

In this section, the data collected from Trial I and Trial II are analyzed to answer the questions raised in the previous section.

(1)      Is 5 minutes enough for an oral presentation?

To answer this question, we examined the number of slides used in the presentations and the actual length of the presentations. In Trials I and II, the number of slides in student presentations ranged from 7 to 22, with an average of 11.0. Though this average number of slides seems large for a 5-minute presentation, most teams were able to complete their oral presentations on schedule. The average length of the presentations was 4 minutes and 40 seconds.

(2)      Do the students have sufficient ability to make professional judgments?

Here, we assumed the professor’s judgment to be “professional” and compared the difference between the scores given by the professor and those given by the students. Table 1 shows the results of the four oral presentations (P1 to P4) in Trial I-B and Trial II. The average correlation coefficient between the scores given by the professor and those given by the students over the 4 oral presentations is 0.57 in Trial I-B. An even stronger consistency, with a correlation coefficient of 0.71, is obtained in Trial II, when the number of student judges was reduced to one-quarter. The average difference between the professor’s scores and the students’ average scores is 0.81 in Trial I-B and 0.72 in Trial II.

These results indicate a large positive correlation between the professor’s and the students’ judgments, even though the professor’s grading did not have to strictly follow the predetermined distribution of scores. Further examination of the data reveals that most of the larger differences occurred when the professor gave an extreme score (6 or 10) to a team. Students’ grading may thus have a balancing effect on the possible subjectivity of the professor’s judgment.

Table 1. The correlation and average difference between the professor’s and the students’ scores

                                         Oral presentation
                                       P1     P2     P3     P4    Average
Trial I-B   correlation coefficient   0.68   0.50   0.56   0.55    0.57
            difference                0.78   0.73   0.85   0.89    0.81
Trial II    correlation coefficient   0.79   0.76   0.60   0.70    0.71
            difference                0.92   0.70   0.64   0.64    0.72

To further explore the consistency of the students’ grading, Table 2 compares the average coefficients of variation (standard deviation / average). The students’ grading in Trial II appears more consistent than in Trial I-B. In both trials, the coefficients of variation are far below those of scores simulated by a computer program, which were generated randomly but followed the grading rules introduced in the previous section (one plausible reading of this simulation is sketched after Table 2).

Table 2. Comparison of average coefficients of variation

Score sample    Average coefficient of variation (%)
Random                          20.6
Trial I-B                       13.1
Trial II                        11.0
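
The paper does not document the simulation behind the “Random” row, so the following sketch shows only one plausible reading: each simulated judge hands out the fixed multiset of scores from the 15-team example in Section 2 in a random order, and the coefficient of variation of each team’s received scores is averaged over teams and runs. The session size, number of judges, and quota are all assumptions, so the resulting figure need not match 20.6%.

    import random
    import numpy as np

    # Hypothetical quota from the 15-team example in Section 2.
    QUOTA_15_TEAMS = {6: 1, 7: 3, 8: 5, 9: 4, 10: 2}

    def random_baseline_cv(n_judges, quota=QUOTA_15_TEAMS, n_runs=1000):
        """Average coefficient of variation (%) of per-team scores when
        each judge assigns the predetermined multiset of scores in a
        random order."""
        multiset = [score for score, k in quota.items() for _ in range(k)]
        cvs = []
        for _ in range(n_runs):
            # One row per judge; column j holds the scores team j received.
            sheets = np.array([random.sample(multiset, len(multiset))
                               for _ in range(n_judges)], dtype=float)
            cvs.append(np.mean(sheets.std(axis=0) / sheets.mean(axis=0)))
        return 100.0 * float(np.mean(cvs))

    print(random_baseline_cv(n_judges=25))  # roughly 13% under these assumptions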

(3)      Does this scheme enhance student participation and the learning outcome of the presentation sessions?

Table 3 shows the questions and results of the questionnaire given to the students at the end of the semester in Trial I and Trial II. The students answered by indicating their level of agreement with each statement, where 2 = strongly agree, 1 = agree, 0 = neither agree nor disagree, -1 = disagree, and -2 = strongly disagree.

The questions in the questionnaire were organized into 4 groups. Group 1 (Questions 1~4) asked students for a self-evaluation of class participation. The results reveal that the students in Trial I-A, in which there was no peer evaluation, reported the lowest class participation. Group 2 (Questions 5~12) asked students about their attitude towards the oral presentations. With the peer evaluation scheme, students in Trial I-B and Trial II had a more serious and positive attitude than students in Trial I-A. They considered the responses from other students when making their presentations (Question 6), and they seemed to pay more attention to the presentations by other teams (Questions 9 and 10). Note that students in Trial II cared about their grades and comments considerably more than students in Trial I (Questions 11 and 12).

Group 3 (Questions 13~17) asked students about their feelings towards the grading. The “perceived fairness” of the grading rules improved slightly in Trial II (Questions 13 and 14), though students in Trial I-B expressed stronger agreement that the grading was objective (Question 17). Group 4 (Questions 18~24) asked students whether they had learned from making oral presentations. Again, with the peer evaluation scheme, students in Trial I-B and Trial II were more positive about the learning outcome of making an oral presentation. Note that in Trial II, students agreed significantly more strongly that their ability to express themselves, communicate, and cooperate with other people improved because of the oral presentations.

Table 3. The results of the questionnaire

No.  Question                                                                                     Trial I-A  Trial I-B  Trial II
1    How much time did you usually spend on each design project? [1]                                 0.00      -0.12       0.53
2    How much did you participate in the design projects? [2]                                        0.91       1.08       1.11
3    Your attendance in the class? [3]                                                               0.97       1.34       1.29
4    How many times did you attend the oral presentations?
       4 times                                                                                        85%        90%        81%
       3 times                                                                                         8%         4%        10%
       2 times                                                                                         0%         0%         1%
       1 time                                                                                          7%         6%         8%
5    I tried to satisfy the professor’s requirements when preparing oral presentations.              0.88       1.34       1.16
6    I considered the responses from other students when preparing oral presentations.               0.50       1.00       1.15
7    I was interested in other oral presentations, and I listened to them carefully.                 0.90       0.95       1.08
8    The other classmates were interested in the oral presentation by my team.                       0.31       0.60       0.81
9    I learned to do better design from listening to the oral presentations of other teams.          0.90       1.44       1.45
10   I can improve my oral presentation skills from listening to other presentations.                0.78       1.10       1.32
11   I cared about the grades of my team after the project presentation.                             0.77       1.06       1.44
12   I cared about the comments given by the professor following the project presentation.           1.02       0.88       1.43
13   I feel it is fair to have a pre-determined distribution of scores for the presentations.        0.40       0.15       0.62
14   I feel it is fair to allow students to grade others.                                            0.56       0.45       0.92
15   I feel the grading of the oral presentations emphasized the design content.                     0.46       0.65       0.69
16   I feel the grading of the oral presentations emphasized oral presentation skills.               0.46       0.65       0.90
17   I feel the grading of the oral presentations is objective.                                      0.16       0.94       0.55
18   The comments given by the professor helped me understand the pros and cons of my team.          0.77       0.91       1.21
19   The grades of the oral presentation of my team met my expectations.                             0.32       0.53       0.69
20   I am more serious about the design project because of the oral presentations.                   0.62       1.26       1.23
21   I understand the professional knowledge better because of the oral presentations.               0.72       1.44       1.25
22   The oral presentations helped me enhance my ability to express and communicate.                 0.78       0.94       1.25
23   The oral presentations helped me improve my ability to cooperate with other people.             0.74       0.90       1.25
24   I would rather make oral presentations than take exams.                                         0.77       1.07       0.92

4.         Discussion and Conclusions

Oral presentation is an essential element of engineering design courses. However, arranging oral presentations has been very difficult and ineffective in our large project-based engineering design class of about 110 students each year. This study explored a scheme of implementing peer evaluation to improve students’ participation and learning outcome in the oral presentation sessions. Basically, students were asked to grade and comment on oral presentations by other students in a pre-defined manner. We conducted two trials in consecutive years to evaluate the effectiveness of the scheme with different formats. Our data and questionnaire results showed that, with careful design of the format, peer evaluation indeed improved students’ participation in oral presentations, naturally leading to a more serious and positive learning attitude and, eventually, a better learning outcome.

In this research, we found that the format of peer evaluation in oral presentations also has a significant influence. To give the students some reference points, a predetermined distribution of scores was imposed on students’ grading. At the same time, this may make grading more difficult, because all presentations must be heard before scores can be given, and students are forced to make strict judgments by picking out the best and worst presentations. The professor’s grading did not have to strictly follow the predetermined distribution of scores, to account for a number of equally good or equally bad presentations.

The proper number of student judges was also a concern. In Trial II described in this paper, only one-quarter of the students were assigned as student judges in the oral presentations. These students seemed to have a greater sense of responsibility than in Trial I-B, in which all students gave grades. As a result, the student judges exercised their professional judgment more carefully, and the scores given by students had a higher correlation with those given by the professor and lower coefficients of variation.

Another side benefit is that, with fewer student judges, we were able to arrange a short Q&A session following each oral presentation. In the questionnaire, students in Trial II expressed significantly stronger agreement that making oral presentations helped them improve their ability to express themselves, communicate, and cooperate with other people.

The student grading process described in this paper naturally leads to concerns about the subtle issue of fairness in grading. Our questionnaire results showed a slight improvement in “perceived fairness” after peer evaluation was implemented. Further examination of the scoring data reveals that students’ grading may balance the possible subjectivity of the professor’s judgment.

Though the motivation for this scheme came from our large engineering design course, the authors feel that the experience and findings of this research can also be useful for other engineering design courses planning to implement peer evaluation in oral presentations.

References

Atman, C. J., Adams, R. S., Cardella, M. E., Turns, J., Mosborg, S., Saleem, J., 2007. “Engineering design processes: A comparison of student and expert practitioners,” Journal of Engineering Education, vol. 96, no. 4, pp. 359-379.

Bygstad, B., Krogstie, B. R., Grønli, T. M., 2009. “Learning from achievement: Scaffolding student projects in software engineering,” International Journal of Networking and Virtual Organisations, vol. 6, no. 2, pp. 109-122.

Jou, M., Wu, M. J., Wu, D. W., 2008. “Development of online inquiry environments to support project-based learning of robotics,” Proceedings of the 1st World Summit on the Knowledge Society (WSKS 2008), pp. 341-353.

Hsu, Y. L., Yo, C. Y., 2003. “The problem-solving approach for the fundamental hands-on practice courses in mechanical engineering education,” Journal of the Chinese Society of Mechanical Engineers, vol. 24, no. 5, pp. 517-524.

Wales, C. E., Stager, R. A., Long, T. R., 1974. Guided Engineering Design, West Publishing Company, St. Paul, MN.

Khandaker, M., Orono, P., Ekwaro-Osire, P., 2008. “Undergraduate engineering team project: Is there any correlation between presentation and participation?” Proceedings of the ASEE Annual Conference and Exposition, 2008.

Carr, S. D., Herman, E. D., Keldsen, S. Z., Miller, J. G., Wakefield, P. A., 2005. The Team Learning Assistant Workbook, McGraw-Hill, New York, NY.

Williams, B. C., He, B. B., Elger, D. F., Schumacher, B. E., 2007. “Peer evaluation as a motivator for improved team performance in Bio/Ag engineering design classes,” International Journal of Engineering Education, vol. 23, no. 4, pp. 698-704.