Friday, September 18, 2015

Who?

One of the issues in our Teachers' strike was evaluations tied to students' test scores. Aside from being asinine, there is little to gain from the idea that you can tell how good a Teacher is by the scores students earn on tests.

There are many intrinsic and extrinsic factors in testing. The tests are not written by Teachers but by professional testing organizations, and the material is not always fully covered in any class at any given time, as many extraneous factors can lead Teachers to amend or alter the curriculum. Those can include Special Education students, English Language Learners, discipline issues, attendance issues, and just plain daily lessons that often go off the expected track, as opening one door reveals another that needs to be addressed as well.

Classrooms are not laboratories of clearly controlled variables; there are independent variables that affect the dependent ones, and that outcome is called learning. So when I found out that the metrics of this touted system of evaluation were created by a consultant in Education, I wondered who she was. Apparently no one knows her, and no one seems to have met her, but Googling her picture, she looked like the new friends Caitlyn Jenner met during her transition.

And then I saw ETS, the test indoctrination center that controls almost all of the testing regimens that exist in America. They are utterly obtuse, secretive and bizarre. They have some convoluted method in which scoring is delayed until the testing period has ended, so you receive your scores weeks later, with no copy of the test, only the number of questions you missed, and no idea which ones, or how or why the scores given were attained or devised. So you can fail a short-answer question without knowing exactly what was wrong with your answer. You are told to answer every question even if you don't know it, and then later metrics are apparently applied, without explanation as to how those, too, are calculated or devised. It is utterly lacking in transparency and vague on how assessments are made, let alone who is making them.

In a real classroom situation you can Scantron a test and have the results in minutes; with any test taken online the results are immediate; and any short-answer questions or essay responses have clearly defined rubrics that allow for differentiation of a response as long as it falls within a parameter that, again, is fully explained in advance. Imagine saying to kids: "Well, the results will be sent to you in two weeks and that is all you will need to know." Oh wait, they do that now with the standardized testing. See if that works in a classroom.

And any Teacher scoring said test would know the student, allow for any variables that could contribute to the response, and assess appropriately. The student could be ELL, SPED or anything else that allows for the old grading on a curve. But we already know that many of the Common Core exams were not corrected by Teachers, and I assume my most recent Praxis exam was not either, so how the scores were devised is unclear. For the record, I passed the exam but failed the short responses about classroom management, something I have been doing for twenty-some years. I must be doing it wrong then. But again, I have no idea.

So who is this person, and what does she know about a one-size-fits-all concept when it comes to Teaching?

Who Is Charlotte Danielson and Why Does She Decide How Teachers Are Evaluated?

The Huffington Post
Alan Singer, Social Studies Educator, Hofstra University
Posted: 06/10/2013

A New York Times editorial endorsed the state-imposed teacher evaluation system for New York City as "an important and necessary step toward carrying out the rigorous new Common Core education reforms."

The system is based on the Danielson Framework for Teaching developed by Charlotte Danielson and marketed by the Danielson Group of Princeton, New Jersey. Michael Mulgrew, the president of the city's teachers union, and Mayor Michael Bloomberg, also announced that they are generally pleased with the plan. According to the Mayor, "Good teachers will become better ones and ineffective teachers can be removed from the classroom." He applauded State Commissioner John King for "putting our students first and creating a system that will allow our schools to continue improving."

Unfortunately, nobody, not the Times, the New York State Education Department, the New York City Department of Education, nor the teachers' union, has demonstrated any positive correlation between teacher assessments based on the Danielson rubrics, good teaching, and the implementation of new higher academic standards for students under Common Core. A case demonstrating the relationship could have been made, if it actually exists. A format based on the Danielson rubrics is already being used to evaluate teachers in at least thirty-three struggling schools in New York City and by one of the supervising networks. Kentucky has been using an adapted version of Danielson's Framework for Teaching to evaluate teachers since 2011, and according to the New Jersey Department of Education, sixty percent of nearly 500 school districts in the state are using teacher evaluation models developed by the Danielson Group. The South Orange/Maplewood and Cherry Hill, New Jersey schools have used the Danielson model for several years. According to the Times editorial, the "new evaluation system could make it easier to fire markedly poor performers" and help "the great majority of teachers become better at their jobs."

 But as far as I can tell, the new evaluation system is mostly a weapon to harass teachers and force them to follow dubious scripted lessons. Ironically, in a pretty comprehensive search on the Internet, I have had difficulty discovering who Charlotte Danielson really is and what her qualifications are for developing a teacher evaluation system.

 According to the website of the Danielson Group, "the Group consists of consultants of the highest caliber, talent, and experience in educational practice, leadership, and research." It provides "a wide array of professional development and consulting services to clients across the United States and abroad" and is "the only organization approved by Charlotte Danielson to provide training and consultation around the Framework for Teaching." The group's services come at a cost, which is not a surprise, although you have to apply for their services to get an actual price quote. Individuals who participated in a three-day workshop at the King of Prussia campus of Arcadia University in Pennsylvania paid $599 each. A companion four-week online class cost $1,809 per person.

According to a comparison chart prepared by the Alaska Department of Education, the "Danielson Group uses 'bundled' pricing that is inclusive of the consultant's daily rate, hotel and airfare. The current fee structure is $4,000 per consultant/per day when three or more consecutive days of training are scheduled. One and two-day rates are $4,500/per consultant/per day. We will also schedule keynote presentations for large groups when feasible. A keynote presentation is for informational/overview purposes and does not constitute training in the Framework for Teaching."

Charlotte Danielson is supposed to be "an internationally-recognized expert in the area of teacher effectiveness, specializing in the design of teacher evaluation systems that, while ensuring teacher quality, also promote professional learning" who "advises State Education Departments and National Ministries and Departments of Education, both in the United States and overseas."

Her online biography claims that she has "taught at all levels, from kindergarten through college, and has worked as an administrator, a curriculum director, and a staff developer" and that she holds degrees from Cornell, Oxford and Rutgers, but I can find no formal academic resume online. Her undergraduate degree seems to have been in history with a specialization in Chinese history; she studied philosophy, politics and economics at Oxford and educational administration and supervision at Rutgers. While working as an economist in Washington, D.C., Danielson obtained her teaching credentials and began work in her neighborhood elementary school, but it is not clear in what capacity or for how long. She developed her ideas for teacher evaluation while working at the Educational Testing Service (ETS), and since 1996 has published a series of books and articles with ASCD (the Association for Supervision and Curriculum Development). ***(For the record, a woman who had done all that by 1996 would have to be well into her 70s today; see below.)***

I have seen photographs and video broadcasts online, but I am still not convinced she really exists as more than a front for the Danielson Group, which is selling its teacher evaluation product. The United Federation of Teachers and the online news journal Gotham Schools both asked a person purporting to be Charlotte Danielson to evaluate the initial Danielson rubrics being used in New York City schools. In a phone interview reported on in Gotham Schools, while Danielson was supposedly in Chile selling her frameworks to the Chilean government, "Danielson was hesitant to insert herself into a union-district battle, but did confirm that she disapproved of the checklist shown to her." The checklist "was inappropriate because of the way it was filled out. It indicated that the observer had already begun evaluating a teacher while in the classroom observation. She said that's a fundamental no-no." The bottom line is that 40% of a teacher's evaluation will be based on student test scores on standardized and local exams and 60% on in-class observations.

In this post I am most concerned with the legitimacy of the proposed system of observations, which are based on snapshots: fifteen-minute visits to partial lessons, conducted by supervisors potentially with limited or no classroom experience in the subject being observed, followed by submission of a multiple-choice rubric that will be evaluated online by an algorithm that decides whether the lesson was satisfactory or not.

 Imagine an experienced surgeon in the middle of a delicate six-hour procedure where the surgeon responds to a series of unexpected emergencies being evaluated by a computer based on data gathered from a fifteen-minute snapshot visit by a general practitioner who has never performed an operation. Imagine evaluating a baseball player who goes three for four with a couple of home runs and five or six runs batted in based on the one time during the game when he struck out badly. Imagine a driver with a clean record for thirty years who has his or her license suspended because a car they owned was photographed going through a red light, when perhaps there was an emergency, perhaps he or she was not even driving the car, or perhaps there was a mechanical glitch with the light, camera, or computer.

Now imagine a teacher who adjusts instruction because of important questions introduced by students, and who is told the lesson is unsatisfactory because it did not follow the prescribed scripted lesson plan and because, during the fifteen minutes the observer was in the room, the observer failed to see what they were looking for, even though it might have actually happened before they arrived or after they left.

When I was a new high school teacher in the 1970s, I was observed six times a year by my department chair, an experienced teacher and supervisor with expertise in my content area. We met before each lesson to strengthen the lesson plan and in a post-observation conference to analyze what had happened and what could have been done better. Based on the conferences and observations we put together a plan to strengthen my teaching, changes the supervisor expected to see implemented in future lessons.

The conferences, the lesson, and the plan were then written into a multi-page observation report that we both signed. These meetings and observations were especially important in my development as a teacher, and I follow the same format when I observe student teachers today. As I became more experienced, the number of formal observations decreased. I still remember a post-observation conference at a different school with a different supervisor, who had become both a mentor and a friend. After one lesson he virtually waxed poetic about what he had seen, but then suggested three alternative scenarios I could have pursued. Finally, I said I appreciated his support and insight, but if I had done these other things, I would not have been able to do the things he really liked. He paused, said I was right, and told me to just forget his suggestions.

 But under the new system, principals will drop in for a few minutes and punch in some numbers. Teachers then will be rated, mysteriously or miraculously, based upon a computer algorithm using twenty-two different dimensions of teaching. Astounding!

And this assumes principals know what they are doing, have the independence to actually give teachers a strong rating, and are not out to get the good teacher who is also a union representative or just a general pain in the ass like I was. But that is a big assumption. Teachers in the field report to me that the New York City Department of Education is already trying to undermine the possibility of a fair and effective teacher evaluation system. I cannot use their names or mention their schools because they fear retaliation.

 I urge teachers to use Huffington Post to document what is going on with teacher evaluations in their schools. Within hours after an arbitrator mandated use of the Danielson teacher evaluation system, New York City school administrators received a 240-page booklet explaining how to implement the rubrics next fall. Teachers will receive six hours of professional development so they know what to expect, not so they know how to be successful. Teachers are being told that while there is no official lesson plan design, they better follow the recommended one if they expect to pass the evaluations.

Administrators are instructed how to race in and out of rooms and punch codes into an iPad, with evaluations actually completed in cyberspace by an algorithm. Teachers will fail when supervisors do not see things that took place before or after they entered the room, if lesson plans do not touch on all twenty-two dimensions, or when teachers adjust their lessons to take into account student responses. Teachers expect to be evaluated harshly. In December 2012 the New York Daily News reported that the Danielson rubric, while still unofficial, was being used to rate teachers unsatisfactory. This year there also appears to be an informal quota system for the granting of tenure. Teachers recommended for tenure by building administrators are being denied by central administration, which suggests how little the opinions of building-based administrators are valued.

As I have written repeatedly in other posts, there are useful educational goals established by the Common Core standards. But unless the standards are separated from the high-stakes testing of students and the evaluation of teachers and schools, they will become an albatross around the neck of education and a legitimate target for outrage from right-wing state governments, frustrated parents, and furious teachers, and they will never be achieved.




***This is from the Danielson website, and the dates and lack of details start to become, how shall one say, vague and very unspecific about where all this happened.***


Danielson traveled a crooked road to get where she is today. Born in West Virginia, she moved with her family to Princeton during high school. She graduated from Cornell with a degree in history – specializing in Chinese history, actually – and then went to Oxford University to earn her master's in philosophy, politics and economics. Twelve years later, in 1978, she earned another master's from Rutgers in educational administration and supervision. ***That means she earned her FIRST master's in 1966, so she would have earned a BA in 1964 at age 21, which puts her birth year around 1943. So she is in her dotage and has accomplished quite a bit in her years, and yet there are no actual dates or places, or how she did so. Why is that?***

After college, she worked as a junior economist in think tanks and policy organizations. While working in Washington, D.C., she got to know some of the children living on her inner-city block – and that’s what motivated her to choose teaching over economics. She obtained her teaching credentials and began work in her neighborhood elementary school.

She and her husband moved to New Jersey, where she worked her way up the spectrum from teacher to curriculum director, then on to staff developer and program designer in several different locations, including ETS in Princeton, and a developer and trainer for teacher observation and assessments. Those experiences shaped her vision of teacher evaluations.

The breakthrough for Danielson was her book, Enhancing Professional Practice: A Framework for Teaching, originally published in 1996. “Framework for Teaching,” as it’s often referred to, was one of several of her books published through the Association for Supervision & Curriculum Development.
