Evaluation and Assessment: The Tail that Wags the Dog?

By Dr. Carol Fineberg, Evaluation & Assessment Consultant

As the pressure continues, in this country and others, to prove the worth and utility of the arts -- music, visual art and design, dance, theater, media -- in the education of children and youth, it is clear that issues regarding evaluation in general, and assessment practices in particular, will persist. This blog attempts to shed light on some of these issues and to offer advice on creating effective evaluation arrangements commensurate with the size of the program to be evaluated.

After many years of evaluating, assessing, and writing reports, I needed a break, and only recently have I begun to take on new assignments. Below is an accumulation of thoughts that should help arts organizations, and the educators who hire them, wrestle with the evaluation dog so that they become better consumers of local and national efforts to validate the arts as an educational enterprise for students and their families.  CF

Arts organizations, schools, and funders all want to know, with varying degrees of urgency, whether the arts programs they have supported are making a difference in teaching and learning the "basics" or, in some cases, in advanced curriculum in the arts and academic subjects.

In both the arts and education, evaluation is essentially about making judgments regarding the effectiveness of an institution (an arts organization or a school, for example) or a program (a performed, written, or visually rendered work of at least some complexity requiring critical analysis, OR a structured approach to learning that can be replicated in whole or in part). Evaluators ask such questions as: What are the various elements of the institution or program? Who are the key decision makers, and who receives its services? What are the financial underpinnings, and how does the budget reflect the mission of the institution or program? Many more questions are asked, and an evaluation design is created to address the questions as posed.

Evaluations are critical when connected to sources of funding and accreditation. Some arts education evaluations address issues of impact on learning, and this requires the inclusion of rigorous assessments of student or teacher learning.

Good evaluation designs generally include information gleaned from assessments of student achievement. But reports of assessment scores do not necessarily discuss the "inputs," the programmatic elements that are linked to achievement in some important way. Assessments these days are usually collections of test data that are compared with other collections of data, leading to interpretations of the relationship between a "treatment," or program, and student or teacher learning behavior. Assessments generally lead to statistical interpretations of data as evidence of "cause and effect," as in how arts instruction "causes" an increase in reading scores (a practice I abhor! See a subsequent blog for a discussion of relating the arts to reading tests). Assessments are also used to substantiate claims that students have learned the basic techniques required in a particular art, music, dance, or media class. And assessments can be used to help teachers understand the depth and breadth of learning in their classrooms, as revealed by student behavior on given tasks or tests.

OK. People may quibble with the definitions, but let's at least accept the above as the foundation for the rest of this particular blog.  

Here's what I have learned over the years while designing, implementing, and reading others' evaluations: 

 

Making artistic judgments:

Artists working in schools usually come with impressive artistic credentials. They frequently have a college degree in their area of concentration, such as theatre, dance, visual arts, media, or literature (creative writing as well as interpreting literature). They are used to being reviewed by their peers as well as by artists who have already carved out a reputation for excellence. Aesthetic criticism is part of their world. They can make insightful observations about the artistic accomplishments of individual students or groups of students.

 

Making educational judgments:

Teachers, on the other hand, come with extensive training in the techniques of teaching and, for some, in developing their own versions of "the curriculum" mandated by some higher authority. They are frequently armed with both undergraduate degrees in "content," such as history, English, math, or science, and graduate degrees in the "methods" of instruction and curriculum development. Much of their time in graduate school is focused on how children and youth learn, and on the psychology of learning. They can make insightful comments on the learning behaviors of students and may also cite examples of a student observed applying, in an academic class, what was learned in an arts session or extended arts experience.

 

Bringing the two together:

Some teachers and some artists have dual backgrounds in studio art and educational psychology, in the history of psychology and philosophies of the arts, as well as in such concrete subjects as classroom management, testing and measurement, and higher levels of cognitive development. As a team, they can provide an evaluator with valuable insights about the interplay between creating and responding to the arts and their students' understanding and the attitudes that govern their lives.

 

Expecting the wrong things from the right people:

But artists are usually babes in the woods when it comes to assessing the impact of their work on the students they interact with as resident artists. The notion of "evaluation" becomes unsettling and intrusive for many and at times can elicit real resentment. It is challenging enough for them to organize a solo instructional act -- managing a group of 25 or so kids with interactive "instruction" that must fit into the usual 45-minute slot -- let alone to think in terms of OUTCOMES instead of process. Indeed, much of artists' work is deliberately unconcerned with outcomes and more concerned with processes that are generically open-ended, marked by the phrase "what if …" rather than by a pedagogical process that anticipates an outcome and aims for it.

 

Some educators -- teachers and administrators -- especially those with extensive backgrounds in research, can gather and interpret test and observation data and write about the impact of programs on students, making very sophisticated judgments about pedagogical issues. They can compare what they see with what research says about signposts of instructional impact. They can immediately tell whether all, some, or only a few students are engaged in the artistic tasks put before them. The more sophisticated can look at student work and compare it with national norms for creative expression at various ages and stages of child development. They can "play" with statistics to compare the performance of targeted children with control groups, and they can follow a cohort of students over time, tracking their individual and group progress. But they are not always prepared to make artistic judgments at the required level. This is especially true when they are not familiar with the ages and stages of children's creative work outside of their own specialty. Frequently an evaluator will enlist experts to supplement their own strengths with others' expertise.

 

Grassroots evaluators interested in assessing student progress can use the data collected by school districts to build a multi-dimensional picture of student progress and, in tandem with artistic partners, can estimate the impact that arts education programs have had in contributing to that progress.

 

Many organizations whose primary mission is to present professional productions, whether theatrical, visual, or media-centric, and whose arts education program is secondary -- think of a local theater or dance company "in residence," or a museum or arts center that provides a series of visits and "hands on" activities -- are rarely able to devote time and effort to assessing their impact on students in any but the most general terms. They are in many ways MORE capable of making smart judgments regarding the integrity of the artistic pedagogy and the qualitative results of student work, but LESS capable when it comes to judging pedagogy and instructional outcomes tied to academic standards.

 

Arts organizations that are specifically constituted to provide educational experiences (workshops, residencies, institutes, etc.) usually have an easier time of applying for and receiving funding to evaluate their work. 

 

Artists recognize artfulness, and this is important when it comes to judging students' creative work. On the other hand, many artists are not schooled in educational terms and usage. The solution, in my mind, is to build an E&A design that includes roles for creative artists as well as educators in acquiring and analyzing data in order to draw conclusions about the effectiveness of the program on learning and the ultimate worth of the program.

 

This brings us to the inevitable question about "the evaluator." Should there be one, or can an arts organization be its own evaluator? I have learned that it is very difficult for arts organizations primarily in the business of producing art for an adult audience to set aside the time OR to attract the money to subsidize a paid expert evaluator. Yet without outside corroboration, their self-pronounced claims of effectiveness carry limited weight. What is a theater company to do if it needs a grant to continue serving young audiences with professional-level performances and it cannot afford to evaluate its programs? How can a dance company be hired to teach the choreography and history of ballet, modern, or jazz to middle school students if the school board needs to be convinced that this is a worthy use of instructional time? We are a nation that requires "proof" of the educational value of cultural learning, alas. A well-designed evaluation, including rigorous assessment measures, conducted by unbiased professionals, seems to be the solution -- but where to find the money?

 

Fortunately, foundations and government funders are increasingly willing to subsidize evaluation as part of a grant in support of residencies, performance series, or individual lessons in the arts. As readers contemplate their particular dilemmas with E&A, I advise them to keep the following in mind:

 

  • Evaluations must be done in context, taking into account the cost of a program, the amount of time students are engaged in it, and the intentions of the instruction.

 

  • Claims of effectiveness must be backed by persuasive evidence, such as responses to questionnaires, surveys, or interviews aligned with the intentions of the program, or "before and after" video clips appraised by experts.

 

  • An evaluation should be a process that includes periodic assessments rather than an end-of-program summary of "what happened" and its apparent impact (check out "formative" as opposed to "summative" evaluation.)    

 

  • Evaluations should be updated periodically, reflecting changes in personnel, students, and intentions. 

 

  • Preliminary findings should be shared during the course of the evaluation, not just at the end. The client (school, district, arts organization) has a right to know what is being revealed and to have a chance to address the assertions or adjust a situation found in need of correction.

 

  • The client for evaluation must be clear regarding the information that is sought. And the price of an evaluation process, dependent upon these clear expectations, should be competitive within the given community. If the evaluation is designed to address a possible national model, the complexity of the process and the details of the report should reflect this, as should the cost.   

 

  • The partners in the arts education project should agree on the design before signing a contract, including:
      • availability of school-based data
      • roles and functions of all concerned
      • assessment instruments and protocols
      • a timeline for the evaluation and assessment procedures
      • "deliverables," such as the oral or written report and its contents, including an analysis of evidence.

 

Finally, for a modest program, consider a modest evaluation and assessment process. The dog should wag the tail, not the other way around! 

Resources for further information:

Many of these books are available online or through www.alibris.com and other secondhand book sites.

Sage Publications (www.sagepub.com) offers many books in print on various aspects of E & A.

Allen, David, ed. 1998. Assessing Student Learning. New York: Teachers College Press.

Rothman, Robert. 1995. Measuring Up. San Francisco: Jossey-Bass.

Bloom, Benjamin, et al. 1981. Evaluation to Improve Learning. New York: McGraw-Hill.

Popham, W. James. 2011. Transformative Assessment in Action. Alexandria: ASCD.

Lewin, Larry, and Betty Jane Shoemaker. 1998. Great Performances: Creating Classroom-Based Assessment Tasks. Alexandria: ASCD.

Brookhart, Susan M. 2010. How to Assess Higher Order Thinking Skills in Your Classroom. Alexandria: ASCD.

Fineberg, Carol. 2004. Creating Islands of Excellence. Portsmouth, NH: Heinemann.

Also useful for arts education advocates are the many articles and resources related to E & A at Edutopia (www.edutopia.org) and on the Arts Education Partnership's two websites (www.aep-arts.org and www.artsedsearch.org).
