Chasing Common Core goals isn’t easy, especially if one teaches at-risk students.
The popular approach is to identify the students at risk and place them in relevant intervention programs. The problem is that pulling students out of the classroom to attend an intervention program threatens their social standing among their peers.
The fact is, pull-out programs take a toll on students’ social lives, affect their confidence, and increase the risk of dropping out of school.
Schools aren’t just about curriculum. Schools are where kids grow up, and growing up is about social and emotional growth too.
Ask yourself: do students who are pulled out of class to attend intervention programs attain the same levels of social and emotional growth?
True, pull-out intervention programs do have visible benefits. They help at-risk students excel and approach proficiency. At the same time, pull-out models can hinder students’ social and emotional growth: kids may master their standards, but they miss out on the interactions that happen in class. Such models do not support the holistic growth of a child.
Push-in models, however, are different. All students learn in the same room, so everyone gets to participate. Even struggling students attend classroom discussions and take part in group activities. This makes the holistic growth of the child a real possibility.
Even so, some at-risk students do require pull-out intervention. Kids who are truly struggling and at risk of being left behind demand special attention, and pulling them out for dedicated instructional sessions helps them catch up with the rest of the class. But how does one measure the growth of such students?
Should we use a single standardized test to judge all our students?
Unfortunately, popular tests aren’t designed to measure student growth. These standardized tests measure student proficiency against pre-defined state standards only. Current state assessments record the results of these tests and label students as either proficient, partially proficient, or not proficient.
Warning: we need to measure student growth too!
Someone needs to answer the question, “How much did a student gain during the year?”
Just last week, I came across a news report on FoxBaltimore.com highlighting six schools in a Maryland school district with zero proficient students. The article states, “They [The schools] do not have a single student proficient in the state tested subjects of Math and English.”
Such is the trouble with judging students by proficiency scores alone: the conclusions of such labeling can be not only inconclusive but grossly misleading.
In this case, the test scores reveal nothing about where the students started or how much they grew during the year. The scores say nothing about the hard work that the teachers and the students must have put in.
A little research reveals that most of these schools are high-poverty schools, with some drawing as many as 80% of their students from low-income families.
Why not measure student growth too?
This is where the concept of adequate student growth comes in. If the objective of student testing is to ensure career and college readiness, the administration should predict future student achievement based on where students start and how fast they subsequently grow, rather than simply projecting their past performance into the future.
Food for thought, isn’t it?