Kristie Kannaley
Dr. Dail
ENED 4414
November 2010
Impact on Student Learning
Although I participated in both reading and standard language arts classes throughout my internship, I was given the unique opportunity to teach an enrichment reading class full-time for two weeks. Although the term “enrichment” sometimes refers to more advanced courses, my class was considered an on-level course. However, there were eight students in special education. Five of them were classified as having learning disabilities, and three of them were classified as having other health impairments. Only about a fourth of the students have reading enrichment at a time; for every nine-week grading period, the students switch to a different core subject (math, science, reading, or social studies) for their enrichment course. Assisting with the regular language arts classes was beneficial for my planning because I was able to see what students were struggling with, which gave me ideas for mini-lessons to teach as reinforcement in the enrichment class.
Overall, my two-week mini unit was centered around the concepts of mood and tone in literature, movies, and music. The students participated in different types of activities, including viewing movie trailers and pictures with different types of music playing in the background. Additionally, we read from several different types of books, such as the class literature book, a children’s book, and an excerpt from a young adult novel. Alongside my lessons on mood and tone, I also decided to incorporate mini-lessons on figurative language, since I noticed that many of the students struggled to remember the difference between some of the literary elements in the narrative pieces that they had written for regular language arts. Specifically, we learned about similes, metaphors, hyperbole, and personification and how they can be used to establish mood and tone in a piece of writing. Additionally, I taught a few mini-lessons about compound sentences and prepositional phrases, since the students were struggling with both of these concepts on their grammar quizzes in regular language arts.
Going into my mini unit, I anticipated a few struggles with the class as a whole. Mood and tone are fairly abstract concepts for the sixth-grade mind, and I feared that I would struggle to portray them in a manner suitable for the students’ developmental levels. I do not think I was exposed to these types of literary elements until high school. However, I had a very attentive and excitable group of students who were eager to learn.
The makeup of the classroom was fairly reflective of the school’s overall demographics. There were twenty-five students in my class: fourteen female and eleven male. Overall, there were twenty-one white students, one Latino student, two black students, and one student whose racial background I could not readily identify. I was surprised to see very few Asian or Latino students throughout the entire school, especially since I have conducted field experiences in other schools in the area that were much more diverse. According to the Cobb County School District website, the demographics for Cobb County schools as of March 2010 were 45% white, 31% black, 15.8% Hispanic, 4.8% Asian, 2.5% multi-racial, and less than 1% American Indian (“About the Cobb County…”). As of 2008, Lost Mountain Middle School was composed of 82% white students, 12% black students, 2% Hispanic students, 2% Asian students, and 2% unknown (“Lost Mountain Middle School”). After seeing these statistics, I realized that my class was fairly typical for the school, though considerably less diverse than the county as a whole.
Furthermore, Lost Mountain Middle School seems to be filled with many students who come from affluent families. It is located in a suburban area. As of 2008, there were approximately 1,107 students enrolled, and only 4% of them were considered eligible for free lunch. This number is incredibly low, considering that the average percentage of eligible students across the state was 43%. Furthermore, only 2% of Lost Mountain students were eligible for reduced lunch, while the state average was 9% at the time. These statistics were also reflected outside of the classroom. For example, the median income for residents living in the same zip code area as Lost Mountain Middle School in 2008 was $70,741, while the average for the same year in the rest of Georgia was $44,644 (“Lost Mountain Middle School”).
Although the average income for families attending the middle school was higher than the state average, the resources available in the classroom seemed to be the same as those available in schools with middle-class families. Although there were a few computers in the back of the room, there were only a few computer labs in the whole school, and I did not come across any class sets of laptops, which are often seen in very affluent schools. However, the classroom that I was placed in had enough desks for each student and almost a whole class set of textbooks. Technological features such as an overhead projector screen that could be connected to the teacher’s laptop were also available. The environment of the classroom was very soothing and pleasant, especially since my collaborating teacher brought in about seven lamps so that she could turn off the fluorescent lights. Additionally, the room was covered with posters about different grammatical concepts the students were studying. Since my collaborating teacher also had a co-teacher to help support the special education students, there were two teacher desks in the room, which provided adequate space for the teachers to keep some of their belongings and paperwork. Conveniently, the classroom I worked in was connected to a small meeting room with a large table. This room worked nicely when students needed a quieter environment to complete assignments, and the door could be left open so that the teacher could easily monitor students in both rooms at the same time. Overall, the ambiance of the room was welcoming, even though some of the highly technological devices that one may see in very affluent schools were not present.
I decided to use a pretest and post test so that I could compare the students’ progress using very similar formats. The pretest was composed of a variety of question types, including multiple choice, short answer, and short essay questions. I chose to use the multiple choice format for my first five questions, which covered the definitions of mood and tone and asked the students to identify what type of figurative language was being used in example sentences. Next, the students were shown two sentences that were labeled “Sentence 1” and “Sentence 2.” The students were then asked to fill in the blanks with various information about the sentences, such as what type of sentences they were (simple vs. compound) and what prepositional phrases they contained. The students were also asked whether one of the clauses was independent or dependent. Furthermore, the students were asked to answer two essay questions that required them to write about their lives using language that reflected a particular mood or tone. For example, one of the prompts was, “Write three to five sentences about your favorite class at school (and why it is your favorite). Make sure you use adjectives that help establish a positive tone. Think about the details from the class that you can share with your reader to help establish a cheerful scene.” Many of the students excelled at this type of question on the pretest, though I am unsure if it was a good measurement of whether they really understood the difference between mood and tone. However, the answers were definitely better the second time I gave the test. Lastly, the students were asked to write a simple sentence about their first day of sixth grade and a compound sentence about what they wanted to eat for lunch that day.
My post test was almost identical to my pretest. I took out a question that addressed complex sentences because we did not end up covering that topic. Also, I rearranged the point values for the different types of questions. Initially, the multiple choice questions on figurative language were worth more, in total, than the essay responses on mood and tone. Since we focused more on mood and tone than on figurative language, I thought it would be more practical to allot more points to the concepts we spent more time discussing in class. Both of my assessments apply directly to my unit because most of the questions require the students to know the terminology above a simple comprehension level in order to answer them correctly. We spent the whole two weeks looking at mood, tone, and figurative language in multiple types of text. Additionally, we covered the types of sentences and prepositional phrases in grammar mini-lessons throughout the unit.
Analyzing and Reporting Data
As expected, most of the students performed quite poorly on the pretest. Although they were unsure about the content, I instructed them to make their best attempt. Below is a bar graph of the grade breakdowns of the entire class for the pretest.
[Figure: Bar graph of pretest grades for the whole class]
Although there were twenty-five students in the class, one of them was absent for the pretest, and I did not require her to complete it when she returned to school. For the purpose of this analysis, I have not included her in any of the charts. The vertical numbers listed to the left of the chart indicate the number of students, while the horizontal letters at the bottom indicate the grades. An “A” represents a score of 90-100, a “B” represents a score of 80-89, a “C” represents a score of 70-79, a “D” represents a score of 65-69, and an “F” represents a score below 65. Overall, no students received an “A” on the pretest, and only one student received a “B.” Additionally, only two students received a “C” or a “D.” Twenty-one out of twenty-four students received an “F” on the pretest, which indicates that they had either never been exposed to the information or were still unclear about most of it. I know that the students in my other language arts classes were familiar with most of the figurative language and grammatical terms, yet the pretest exposed some inconsistencies in their application of the concepts. For example, many of the students were confused about the difference between a preposition, a prepositional phrase, and an object of a preposition. The pretest was very informative because it allowed me to see what the students were thinking so that I could help clarify the concepts they were misunderstanding. Additionally, since most of the students missed the multiple choice questions on “mood” and “tone,” I could draw the conclusion that most of them had probably not learned these concepts before.
After the two-week unit, I gave the students a post test that was nearly identical to the pretest. The only differences were that I took out the question about complex sentences and added another one about prepositions. Additionally, I changed the weighting of the questions so that concepts we had more time to unravel were worth more than ones we did not spend a lot of time reviewing. Below is a graph of the results of the post test.
[Figure: Bar graph of post test grades for the whole class]
Overall, the class improved throughout the unit, though some of the grades were lower than I expected them to be. While the highest grade was a 100%, the lowest was a 50%, which means that some of the students were still struggling with many of the concepts. Overall, four students received an “A,” and six received a “B.” Five students received a “C,” while three received a “D.” Five students failed the exam with grades below 65%. Although some of the grades were still failing, almost every student improved his or her score from the pretest; one student received the same score on both tests. In most cases, the improvement was significant. The average increase was 24.5 points, with the highest increase being 55 points and the lowest being 0 points. Fourteen students increased their scores by at least 30 points. Overall, these scores show that the students became more proficient in their understanding of mood, tone, figurative language, compound vs. simple sentences, and prepositions. Though the two-week unit did not lead each student to receive a passing score, the numbers indicate that it was not detrimental to the students’ understanding, since everyone received a grade that was greater than or equal to his or her initial score.
Since I had nine special education students in my class, it is important for me to examine their improvement as compared to the students who do not have individualized education plans (IEPs). Five of the students were individuals with learning disabilities, while four of them had other health impairments that were not specified to me (though my collaborating teacher mentioned that some of the students had attention deficit disorder). Below is a graph of the pretest scores of the students with special needs compared to the scores of students who do not have IEPs.
[Figure: Bar graph of pretest grades for IEP students vs. non-IEP students]
The chart above shows that the only students who received a “D” or higher on the pretest were students who did not have any classified disabilities or impairments. However, most of the non-IEP students failed the pretest as well, so it is difficult to distinguish between the two groups. The average score for special education students was 38.62%, and the average score for the rest of the class was 48.75%. Keeping in mind that there were many more students in the second group, the differences between the two were not as profound as one may think. After all, nineteen students scored between 30% and 50%.
The differences between the scores of each subgroup can be seen more prominently when comparing the post test scores. The chart below shows the grade range for both groups.
[Figure: Bar graph of post test grades for IEP students vs. non-IEP students]
According to the chart, almost the same proportion of IEP students received an “A” as non-IEP students. However, a greater percentage of non-IEP students received a “D” or higher than IEP students. The point spread was greater for the larger group than for the smaller group. Below is a chart indicating the differences in the point increase between the two groups.
[Figure: Bar graph of point increases from pretest to post test for IEP students vs. non-IEP students]
According to the chart, almost all of the IEP students improved by at least eleven points, and half of them improved by at least twenty-one points. The average increase in scores for IEP students was 20.5 points. On the other hand, most of the non-IEP students improved by twenty-one to forty points, which is a little more than the IEP students. The average increase in scores for non-IEP students was 28.68 points. Since the average increase for IEP students was lower than for the non-IEP students, the IEP students probably needed more support or remediation of the concepts discussed in the two-week unit.
Another target group in the classroom is often the male population, so I have decided to compare test scores between the males and females. Below is a chart that displays the pretest scores for both groups.
[Figure: Bar graph of pretest grades for boys vs. girls]
There were eleven boys and thirteen girls in the class. Although there were more girls, fewer female students had IEPs than male students. Overall, there were five boys with IEPs and three girls with IEPs. The pretest scores were similar for both groups of students, since almost every student received an “F.” More boys received a score higher than an “F,” yet there were only three students in all who scored higher than a 64%, which means that there was little difference between the two groups on the pretest. The average score for the girls was 44.9%, while the average score for the boys was 45.5%. Both genders performed nearly equally on the pretest.
Below is a chart indicating the scores for both genders on the post test.
[Figure: Bar graph of post test grades for boys vs. girls]

Overall, the grade spread was much greater for the girls than it was for the boys. Six boys made a “B” on the post test, while five boys made an “F.” On the other hand, only four girls made an “A” or a “B.” Seven made a “C” or a “D,” and two made an “F.” More males failed the test than females, yet it is difficult to tell which gender actually had more success due to the spread in grades. Below is a chart indicating the point increases for both genders between the pretest and the post test to better clarify any differences.
[Figure: Bar graph of point increases from pretest to post test for boys vs. girls]
The chart of point increases makes it much easier to see that the girls tended to improve more between the pretest and the post test than the boys. Overall, the average point increase for males was 23.36 points, while the average point increase for females was 28.92 points.
For my third subgroup analysis, I have decided to analyze the differences in scores between the student who scored the highest on the pretest (Student A) and the student who scored the lowest on the pretest (Student B). Student A is a male who does not have an IEP, and Student B is a female who has an IEP for “other health impairments.” Below is a graph indicating the differences between their two scores.
[Figures: Pie charts of pretest scores for Student A and Student B]
The pie charts above represent the percentages the students earned on the pretest. The blue areas indicate the percentage of questions the students answered correctly, while the red areas indicate the percentage of questions the students answered incorrectly. Student A received an 80% on the pretest, while Student B received a 29%. This is a huge difference and represents the two ends of the spectrum of how much the students already knew about the topics we were going to discuss in the two-week unit.
Below is a chart of how Student A and Student B performed on the post test.
[Figures: Pie charts of post test scores for Student A and Student B]
Both of the charts representing the post test scores look very similar to the pretest charts. Student A received an 84% on the post test, while Student B received a 30%. Neither score was very different from the initial one; numerically, neither student significantly improved. Student A increased his score by four points, and Student B increased her score by one point.
Overall, I was pleased to see that every student either improved upon or matched his or her pretest score. Numerically, every student’s score benefitted from the mini unit except for one, which stayed the same. However, the most rewarding part of this analysis was looking at the point increase for the class. The average increase was 24.29 points, which means that the students answered about a fourth more of the questions correctly on the post test than they did on the pretest. As a class, the students were most successful on the questions addressing mood and tone. This result makes sense because we discussed mood and tone every day throughout the two weeks, while we only spent time covering other concepts on certain days.
I believe that the instructional strategy that helped the students become successful on this part of the exam was how I differentiated instruction. Throughout the unit, I used movies, songs, pictures, and books to help the students learn how to identify the differences between mood and tone and understand how authors create these elements in their texts. The most effective lesson was the one where I had the students listen to music. I played them five different songs from classic movies (like Star Wars and Jaws) and had them draw a picture to represent the mood they felt after listening to each song. Then, I played the five songs again, but this time I showed five different pictures on a PowerPoint presentation, one for each song. The students had to write a sentence about the mood they felt for each picture and song combination. The catch was that the pictures did not match the songs. For example, I showed a picture of Dora and played the song from Jaws that comes on when the shark is about to attack. Additionally, I played the theme song from Star Wars and paired it with a picture of Barbie. The students thought this activity was difficult. I think this was when they really understood how to identify mood and tone and how different devices can be used to completely change a story.
On the other hand, some students did not have a huge increase in their scores, which means that there are students who are still struggling with some of the concepts discussed in the unit. Overall, I believe the students struggled the most with identifying simple versus compound sentences and with identifying prepositions and prepositional phrases. Honestly, I did not feel very confident about how I taught the students to identify compound sentences. The concept seemed so much easier to teach when I was producing the lesson plan. After all, I remind students of how to form a compound sentence at the Writing Center almost every day, so I did not anticipate struggling to teach it to sixth graders. I gave the students a handout so that they could follow along, and I wrote an example or two on the board. I think part of the reason the students were not successful is that they were unsure of the difference between an independent and a dependent clause. My collaborating teacher and I tried to explain that an independent clause “makes sense” on its own, while a dependent clause does not. However, by the end of the lesson, I could tell that the students were still confused.
Next time, I think I will try to break the concepts down into smaller parts. I gave the students the following formula to identify a compound sentence: simple sentence + comma + coordinating conjunction + simple sentence = compound sentence. My collaborating teacher suggested that I use the following formula to simplify the concept: I + , + CC + I = compound sentence. Also, I should have taught the students the definition of a complex sentence and focused on the differences between the two. Some of the students gave me examples of complex sentences when I asked them for examples of compound sentences. I believe they made this error because I told them that each side of the sentence has to make sense by itself. This is true for both a compound and a complex sentence, hence the confusion.
The first subgroup I examined in my analysis was the students with individualized education plans (IEPs) versus the students who did not have them. Although most students failed the pretest, the special education students tended to score particularly low. The highest score for IEP students was 60%, and the average score was 34.3%, while the average score for non-IEP students was 48.75%. More importantly, the average post test score for IEP students was 61%, while the average post test score for non-IEP students was 80.35%. Although there is a huge discrepancy between these scores, the point increases for IEP students and non-IEP students were not significantly different.
Although I differentiated my whole-class instruction, I did not differentiate how I taught the IEP students versus how I taught the non-IEP students, which may be why there was such a huge gap in the scores. I should have supplemented the IEP students with more one-on-one discussion. This could have been accomplished by having the students work in small groups more often throughout the unit. Additionally, I should have differentiated my handouts so that the students who struggled to pay attention in class would have a more thorough outline of what we discussed. Furthermore, I could have slowed down some of my instruction. I tend to talk too quickly, and I did not allow enough time between asking a question and calling on a student. Students with learning disabilities may need more time to process information. Therefore, if I had given the students more time to think or given them time to write about the question, the IEP students could have been more involved with the lesson and thus more successful on the post test.
The second subgroup I chose to examine was the boys versus the girls. From what I observed, students at the sixth-grade level tend to be fairly talkative, which sometimes keeps them from hearing the lesson. The boys were much rowdier than the girls in the period I taught. Additionally, more male students had IEPs than female students, which can also play a role in the differences in the test scores. Overall, the average post test score for the males was 69.27%, while the average post test score for the females was 73.8%. Although these averages are not too far apart, the females tended to score higher than the males.
Honestly, I am not sure how to explain the differences in the scores. I tried to plan my lessons in ways that appealed to both genders. I chose stories with male and female protagonists, played music from action movies, and showed pictures that both genders seemed to recognize and identify with. Perhaps these scores relate back to my first focus group: IEP students. Almost half of the male students in my class had IEPs. Although I tried to give the entire class ample support throughout the unit, I should have been more careful to differentiate for the IEP students. I think part of this issue lies in the fact that I did not find out what types of disabilities the students were dealing with until I was finished with my unit. Additionally, I have not had a lot of practice with teaching in general, especially with teaching students with learning disabilities. Next time, I will try to find out more about the IEP students at the beginning of the unit so that I will be able to better accommodate them. Additionally, I will have more conversations with the co-teacher to see if he or she has any suggestions for working with these students.
The third group of students that I specifically analyzed consisted of the student who scored the lowest on the pretest and the student who scored the highest on the pretest. It is interesting that both of these students had some of the lowest point increases after taking their post tests. Student A went from an eighty percent to an eighty-four percent, while Student B went from a twenty-nine percent to a thirty percent.
From the beginning of my unit, I realized that I might struggle with these students. Student A misses a lot of school and tends to get distracted in class. Sometimes, he fails to participate in the activities and does not turn in all of his work. Student B has an IEP for “other health impairments” and also misses a lot of school. She becomes even more distracted during class than Student A, and she generally only halfway completes her work.
Interestingly, even though Student A and Student B had similar classroom habits, my instructional techniques worked better for Student A. He found the mood and tone lessons to be engaging, particularly the lessons that allowed him to come up with hyperboles or listen to music. Student B, on the other hand, struggled to pay attention to many of the lessons. If I could redo my unit, I would add activities that require more group and one-on-one interaction between classmates. Many of my activities were completed as a whole class, and I think Student B could have benefitted from having to participate with another student. Having someone sitting there and talking with her would probably help her stay focused for longer periods throughout the lesson.
Overall, I believe my assessment was effective because it required the students to think on numerous levels. The writing sections forced them to apply their knowledge instead of just regurgitating it. If the students were unsure of how to form a compound sentence, they would struggle to just “guess” the correct answer, since they had to produce their own compound sentences. However, the questions that required students to label sentence parts included compound sentences. Students who were familiar with compound sentences should have recognized the example and, therefore, been able to answer the question that asked them to write a compound sentence correctly.
On the other hand, I do not think my questions that asked the students to choose which type of figurative language was being shown in an example were very effective. Many of the students missed these questions, but I know that they understand the terminology because they have come up with examples for me during class. I think the examples I chose to include on the test were fairly difficult to identify. Next time, I may have students come up with their own examples on the exam.
In addition to the test, I used several other methods to assess the students throughout my unit. One of my favorite assessments was the performance task because it was probably the most challenging assignment I gave them. Basically, I asked the students to write a newspaper article in response to a video I showed them about robots and superheroes. Half of the class had to write from the side of the robots, while the other half had to write from the side of the superheroes. I asked the students to use a particular tone in their articles. Additionally, I required that they include at least two compound sentences and at least one example of either personification or hyperbole. This assessment was effective because it showed me whether the students understood the information well enough to combine all of the elements together.
Furthermore, I assessed the students through several exit tickets. Some of them were not very effective because they basically showed that the students had paid attention and filled out the information, yet they did not always show whether the students truly understood it. On the other hand, some of them were very effective. I had the students come up with their own examples of figurative language, and I could tell whether or not they understood the information based on the examples they wrote. One issue that I had with these assignments is that several of the students did not turn them in because they were too eager to leave at the end of class. Next time, I will be more organized with how I structure the lesson and will collect the exit tickets as the students walk out the door.
For the most part, the assessments reflected how the students did on the post test. Students who understood the information tended to turn in their exit tickets. In particular, the students who were successful at incorporating all of the elements into the newspaper articles received high test scores. In contrast, I was surprised by some of the grades the students received. Some of the students who frequently answered questions correctly during class were not as successful on the test as I would have thought they would be. This may have something to do with the format of the test or the students’ comfort with taking tests in general.
One of my professional learning goals as a result of this experience is to be more comprehensive in my differentiation. I feel like I was successful in differentiating the types of lessons I led, yet I do not think that I planned well enough for helping the students with IEPs. Next time, I want to find out more information about these students at the beginning of the experience so that I can better accommodate them. Additionally, I want to work on better managing the classroom. Although the students were engaged in the actual lesson, they often became too excited and had trouble controlling themselves. There was a lot of calling out during class time, and I sometimes had trouble hearing the student I had called on.
In order to reach these goals, I plan on researching more information about how to help students with learning disabilities and other exceptionalities. There are many resources available to me, including professors, academic journals, and textbooks. Additionally, I plan on talking to other teachers about how they manage their classrooms so that I can acquire some new techniques to use in my own classroom.
Works Cited
“About the Cobb County School District.” Cobb County School District. Cobb County School District, 2010. Web. 10 Nov. 2010.
“Lost Mountain Middle School.” Public School Review. Public School Review LLC, 2010. Web. 10 Nov. 2010.