On April 2nd, 2015 at Wake Tech’s North Campus, Ed Neal addressed the Wake Tech Faculty Professional Development Conference at 9:00 a.m.
Evaluating Student Learning: Faculty Professional Development Conference 2015
The hope of this lecture was to create usable training. When examining your approach to your classes and how you evaluate student learning, begin by evaluating your Student Learning Outcomes (SLOs).
- Which ones can be multiple choice questions?
- Which ones can be fill-in-the-blank questions?
- Which ones can require a rubric for open-ended work?
It is important to note that there is a distinct difference between assessment and evaluation.
Evaluation— placing a value
Assessment— Sitting beside the learner and making a determination
More assessment means less stress and better evaluations. Use follow-up questions to accurately assess student learning after lectures and readings. Discussions are a great way to give students asynchronous communication while assessing their learning.
If you are giving a grade for everything, students will become grade-oriented: they will complete a task only if it seems worth doing and the reward seems high enough.
Your students will want to know “How am I graded?” and “What kind of assessments will I receive?” Be crystal clear. Evaluation is measurement. It should be valid (that is to say, it should test what you want it to test), and it should also be reliable (that is to say, it should produce consistent results again and again).
What are the Threats to Validity?
Threats to validity are often not mentioned, and cheating is chief among them. What is it that you are measuring if the students cheat? It’s really a good question. Testing improves learning; the research clearly supports this.
We took a short test, a tauroscatological test.
Incorrect answers will stay with students if not corrected; they can stay with people for a lifetime. The best solution is to discuss incorrect answers with students and explain why the proper answers are correct. Students typically complete 200+ multiple-choice tests by the time they graduate high school. A typical item has one right answer, one clearly wrong answer (the opposite of the correct one), and two items which are deemed “distractors.” Your students are GOOD at taking these tests. You are very poor at writing them.
Case problems are typically the worst questions of all. Teachers like them because they are ways of dealing with real-world problems. Students hate them because they are difficult, more so than any other type of question they encounter. Dial back the number of these questions that you demand students answer. These are the “money questions.”
Time to go
At this point, the discussions began to diverge from my needs. As a graphic and web design teacher, our examinations are based on real-world scenarios, multiple choice questions, and the best I can come up with in a constantly changing landscape of culture and technology. Frankly, if I can come up with 50-60 questions, I feel totally on top of my game.
The discussion broke down into item analysis: how to properly calculate the discrimination index for your questions using results from a Scantron machine, and the mathematical calculations required to do this. I don’t use Scantron forms, and I haven’t used one since I started working in education in 2005. All of my testing is Blackboard-based, and I couldn’t help with this or even pretend to understand how these mathematics could be carried out. The populations they discussed were in the 200+ range per section, while my largest class is 24 students per section, or up to 96 students per semester across all my courses.
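For readers who, like me, have never run these numbers by hand: the classic upper–lower discrimination index compares how often the strongest and weakest students get an item right. This is a minimal sketch of that standard formula from classical test theory, not a reconstruction of the exact calculations the session presented; the 27% group split is the conventional rule of thumb, and the sample data is invented for illustration.

```python
# Classic upper/lower-group discrimination index (D) for one test item.
# Assumes each student's item responses are already scored 0 (wrong) / 1 (right).

def discrimination_index(responses, item, group_frac=0.27):
    """responses: list of per-student lists of 0/1 item scores."""
    # Rank students by total test score, highest first.
    ranked = sorted(responses, key=sum, reverse=True)
    n = max(1, int(len(ranked) * group_frac))  # size of upper/lower groups
    upper, lower = ranked[:n], ranked[-n:]
    p_upper = sum(r[item] for r in upper) / n  # proportion correct, top group
    p_lower = sum(r[item] for r in lower) / n  # proportion correct, bottom group
    return p_upper - p_lower  # ranges -1 to +1; higher means a better discriminator

# Hypothetical class of four students answering three items:
scores = [
    [1, 1, 1],  # strongest student
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],  # weakest student
]
print(discrimination_index(scores, item=0))  # → 1.0
```

An item that strong students miss as often as weak students (D near 0, or negative) is the kind the item-analysis workflow flags for rewriting or removal.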
We then broke into small groups and discussed our individual SLOs and which could be turned into which kinds of questions. Since my courses generally cover four SLOs, my portion was over quickly enough to focus on others for the remainder of the class. I found this class enjoyable, and I learned a great deal; however, I could not participate as fully as others.
Ed Neal is a consultant with 34 years of experience in faculty development.