Sunday, April 19, 2015

Student Survey Questions: valid? reliable?


As the TKES year draws to a close, it appears "the system" has some significant flaws.  There are some good elements to the new system (the TAPS in general are a good start), but other aspects are woefully unscientific, unreliable, even unprofessional.  This post focuses on the Student Survey Questions of Instructional Practice for grades 6-8.
  1. It is possible that the student survey questions will not and cannot lead to valid and reliable results (but we will use the results to evaluate teachers anyway).
  2. The questions themselves cannot apply to all classes, all teachers, or all grade levels.
  3. Most of the questions are nearly impossible to score a "strongly agree" on, because a 12-year-old is answering them.  (I typically do not "strongly agree" or "strongly disagree" with anything.  Better choices would be: Most of the time, Usually, Sometimes, Not very often, etc.)
  4. Because the questions cannot apply to every teacher, teacher effectiveness will instead be determined by comparing teachers to other teachers in the school, the district, and the state (hence the "School, District, and State Mean" and "State Median" columns).  In other words, our effectiveness is not being graded against a teaching standard (the survey question), but against other teachers - see the sketch after this list.  (What if we graded students against each other and not the curriculum standard?  That would be nonsense.)
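To make the distinction concrete, here is a minimal sketch of the difference between grading against a fixed standard and grading against other teachers.  The numbers are made up for illustration - nothing here reproduces actual TKES data or the actual TKES formula.

```python
# Illustration only: criterion-referenced vs. norm-referenced judgment of one
# survey average.  All numbers are hypothetical; the real TKES scale and cut
# scores are not published in this post.
my_score = 2.9
standard_cutoff = 3.0                       # criterion-referenced: a fixed bar
peer_scores = [2.4, 2.6, 2.8, 3.1, 3.3]     # norm-referenced: other teachers

# Grading against the standard: only the cutoff matters.
meets_standard = my_score >= standard_cutoff

# Grading against other teachers: only my position relative to the mean matters,
# which is what the "School, District, and State Mean" columns imply.
peer_mean = sum(peer_scores) / len(peer_scores)
above_peer_mean = my_score > peer_mean

print(f"Meets the fixed standard: {meets_standard}")                          # False
print(f"Peer mean: {peer_mean:.2f}; above the peer mean: {above_peer_mean}")  # 2.84; True
```

The same score can fail the standard yet still beat the peer mean (or the reverse), which is exactly why grading teachers against each other instead of against the standard says little about the standard itself.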
Question #6: "My teacher chooses activities and assignments based on what students need to learn."  Yes, all the time: during class, for homework, for the day, the week, the nine weeks, the year, and for all three years in middle school.  What else do teachers do?  How could a student possibly answer anything other than "strongly agree"?  Because they do not know; they could not know.  Anyone without strong pedagogical training, knowledge, and practice is not even qualified to answer that question.  Teachers even collaborate across grade levels so that there is continuity of learning in middle school.  We are not having students fill out word search puzzles.  87% of my students answered that question positively; however, at least one student answered "strongly disagree" to every question.

Question #7: "My teacher gives students as much individual attention as they need [emphasis added] to be successful."  A laughable survey question.  58% of my students answered it positively.  Think of what a classroom would have to look like for 90% of the students to answer "strongly agree" to that (I cannot even imagine); remember, most classrooms are overcrowded....

Question #10:  "My teacher allows me to work with different groups of students depending on the activity we are doing."  Of course we do - any time, all the time, every time - depending on the activity assigned to accomplish the learning goal as determined by the teacher so that the students will learn the curriculum.  But that does not mean every day, or whenever the students want to.  The long-held and growing perception that collaborative learning is as effective as direct instruction is wrong.  The conditions required for collaborative learning to be as effective as direct instruction are so demanding that almost no one meets them.  Stacks of high-quality research show that over and over again (I can send you sources if you want; it was an influence in my doctoral study).  I received a 47% positive response on that question.  Again, anyone without strong pedagogical training, knowledge, and practice is not even qualified to answer that question.

It appears that if I want higher scores on my survey results, I will need to create (yet another) system for next year to teach middle school students not only the state curriculum of concepts and skills, but also how to identify and understand the Teacher Performance Standards, pedagogy they have no background in (so I can get better survey results), and how to master the Student Learning Objectives.  If only I could see my students for a full class period each day.

Saturday, April 18, 2015

Student Surveys of Teacher Instructional Practices: absolutely laughable, if it weren't true


  1. The use of student surveys (for middle school: 11-13 year olds) to impact teacher evaluation, TEM score, effectiveness, retention, and certification would be absolutely laughable - if it weren't true.  Because it is true, it is horrifying and ridiculous.  Now teachers need to teach students how to evaluate their teacher: here's what summary looks like, here's what differentiation looks like, here's what asking questions looks like, etc.  More later.
  2. It seems reasonable that if a child - a student - can take a survey that impacts a teacher's - a professional educator's - job, then why couldn't we create a survey of our students that counts toward their grade (i.e., a summative test grade)?  More later.
  3. A friend of mine, who has a bird's-eye view of many things, is of the opinion that this entire TKES process was developed to create an evaluation system capable of producing outrage and distrust in the Georgia public education system, so that public sentiment would move toward supporting Georgia's elected politicians' push for more charter schools (which was just voted on and approved).  That would allow schools, teachers, and benefits to move out of GA DOE funding and into privatized companies, letting schools function outside the GA DOE - a shift rooted in the underfunding of public schools years ago - and, in its fullness, it will create a disparity between high-functioning charter schools and low-functioning public schools: a reinvented system of segregation, this time between the haves and the have-nots.  TKES isn't about providing an improved, valid teacher evaluation; it's about creating political movement toward charter schools.  The opinion is hard to argue against; time will tell.
  4. A teacher's rating, as with all other evaluation systems, is in the hands of the interpreting evaluator.  A teacher with less than 5 years of experience has seven IVs and three IIIs on his/her summative evaluation.  Another teacher with minimal experience has five IVs and five IIIs.  Another teacher with less than 5 years of experience, who provides differentiated experiences for his/her students every day, receives IIs.  One evaluator interprets by the letter of the rubric; another interprets: "if you could teach a class on this Standard, you should receive a IV."  ...completely different interpretations, different grades, and different summative outcomes for the teacher - connected to their certificate.  Word is that if you receive a II in any Standard, you may not be eligible for interview or hire in some counties.
  5. The survey questions do not and cannot apply to all subject matter; this creates validity and reliability problems (yet we are going to evaluate and judge teachers anyway.  See #3.)
For example:  Survey Question #4: "My teacher takes time each day to summarize what we have learned."  My score: 1.95 (17% strongly agree, 37% agree, 29% disagree, 15% strongly disagree).  Clearly 12-year-old students are clueless (because they are only 12), I summarize more than once per class period, and now it seems teachers need to teach students how to answer survey questions (in lieu of time spent on standards) to keep their jobs - or at least to avoid uncomfortable summative TKES conferences about their student survey results.  I graduated with High Honors (high school), cum laude (college), a 3.8 (master's), and a 4.0 (doctorate); I apply what I know and constantly use formative assessments during class so my students will learn efficiently, and now I have to figure out a way for 12-year-old students to recognize and approve of my teaching strategies and methods in order to validate my abilities.  Something seems out of place here (see #3).
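As an aside on the arithmetic: the report does not explain how those response percentages become a 1.95, but the usual approach is a weighted mean on a numeric Likert scale.  Here is a minimal sketch, assuming a 1-4 point scale (an assumption on my part - it does not land on 1.95, so the actual TKES scoring must use a different scale or formula); it does at least show where the negative figure in the next paragraph comes from.

```python
# Hypothetical reconstruction of a survey-item average from response percentages.
# The point values below (strongly disagree = 1 ... strongly agree = 4) are an
# assumption; the actual TKES scale is not given, so this will not match the
# reported 1.95.
responses = {
    "strongly agree": 0.17,
    "agree": 0.37,
    "disagree": 0.29,
    "strongly disagree": 0.15,
}
points = {"strongly agree": 4, "agree": 3, "disagree": 2, "strongly disagree": 1}

total = sum(responses.values())  # 0.98 - rounding or unanswered items
weighted_mean = sum(points[k] * share for k, share in responses.items()) / total

positive = responses["strongly agree"] + responses["agree"]            # 0.54
negative = responses["disagree"] + responses["strongly disagree"]      # 0.44

print(f"Weighted mean on an assumed 1-4 scale: {weighted_mean:.2f}")   # 2.57
print(f"Positive: {positive:.0%}   Negative: {negative:.0%}")          # 54% / 44%
```

Whatever the exact formula, 44% of the responses to that item were on the negative side, which is where the next paragraph picks up.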

So, 44% of my students (the 29% who disagreed plus the 15% who strongly disagreed) do not think I summarize the concept each day.  Hummmm.  Let's see.  Suppose I teach music.  I teach a new note: I give its location on the staff, I give its name, I give the fingering, I let the students see it and finger it, I explain how to play it, I explain how it is used, I show them why the note is the way it is, I have the students play the note, I double-check (through formative assessments) that the students understand and can perform the note, I have them play the note again, and then I have the students play the note in new music.  If that is not summarizing the new note, I do not know what is, but 44% of my students do not think so.  The same goes for rhythms, dribbling a basketball, singing on pitch, sewing a straight line, or drawing in perspective.  In many performance-based classes (Connections classes in particular), concept is about 20% and performance/skill is 80%; therefore, playing/singing/drawing/performing the new concept is summarizing.  So, if teachers are going to improve their student survey averages and TEM scores in the coming years, it appears we will need to find new ways of teaching the standards, documenting performance for evaluators, and teaching students how to answer the survey.