I have always tested for vocabulary, understanding of concepts, etc. — roughly two assessments per quarter, plus all of my projects. With the addition of benchmarks, I know I am teaching less in a year: about three weeks is lost to testing, re-teaching, re-testing, scoring, marking, and reviewing. I am not a fan. These SGO/SLO tests are, for many, part of a teacher's annual review process and are used to gauge effectiveness. For some, poor student success means the loss of a job.
This obsession with testing is having an impact. Senior teachers, the best of the best, are opting to retire early. Though this may seem like a financial boon to some districts, it represents the loss of a critical, experienced knowledge base. These teachers know additional testing is hurting kids, and they do not want to be party to a system that runs counter to what they know works, what they know is "good for kids." Many will point to failing schools, and surely we need standards, but why schools that are already meeting standards, including high-performing schools, are forced to endure this is unfathomable.
Be that as it may, it is the current reality, and if you plan to stay in the profession, you'll need to work within the confines of this corporatized model.
SLO/SGOs are supposed to show growth, so I have designed my assessments around that goal: I use an assessment that easily demonstrates the growth we all experience. How I "test" also matters. I have chosen to focus on a benchmark that measures performance-based skills, as opposed to a "recall" or vocabulary-based test. There seems to be a movement in that direction anyway; some schools actually forbid the use of multiple-choice or fill-in exams. (Mine does.)
Let's assume you have to give two SGO/SLO exams: one near the beginning of the year, and another toward the end to measure growth. What should you test students on? My exams are based on students demonstrating their understanding of the art elements (line, shape, form, color, etc.). I chose the elements because we use them from kindergarten through high school, and even into college. The vocabulary changes, but the elements are infused everywhere. For my assessment, students draw something from observation and then write a little bit about how they used the elements. My rubric is detailed.
For the first test, students draw something in my room. I put out a few simple, colorful objects, but they nearly always draw something simple: a pencil, book, box, bottle of paint, flag, etc. I do not review the rubric with them except to say, "This is what I use to grade your test," but it's there. They hardly read the rubric, so scores are low-ish. A few fail.
For the second test, students have to draw their hand from observation. This automatically raises the level of difficulty (and naturally scores higher points; even a poorly drawn hand is better than a good drawing of a pencil or book), AND I review the rubric in detail before they begin testing. I have them underline things they think they will be able to include from the "proficient" and "advanced proficient" columns, like layering colors, including shadows, details, and textures.
This second test, when compared to the first, shows OBVIOUS growth. I wouldn't call it "gaming the system," but if administration is going to rate me on "student growth," and my job hinges on student growth, I will "assess" like this.
I would much prefer to "test" the way I did before, but that's not the world we live in.
Free samples of some of my assessments are on my blog HERE.
Additional Resources: "Art Assessments" (Grades 6 to 12) was just revised with 15 SLO/SGO performance-based tests. It's 30% off with code 3YPBN853.