Saturday, August 28, 2010

Data Analysis

I have finally finished grading the pre and post tests from Troy and have entered that data into a spreadsheet. Unfortunately, the data I have is not the complete set from the summer. My house was burglarized, and my backpack, which held most of the pre and post tests from the summer, was taken. I had not gotten a chance to digitize them before this happened, so everything that was in my bag was lost.

The initial pre test administered at the very first session was the only one that was not in my backpack when it was stolen. I administered that test again at the final session to try to recover some of the lost data, but I don't think the conditions quite replicated those of the original tests. Since this post test was administered six weeks after the pre test, and three weeks after the conclusion of the sessions applying the concepts on the test, some concepts could already have been forgotten.

The program also started with 18 students, while at the end there were only seven, so the little data I gathered may not be representative of the whole group. One of the students present at the last session began the program a few sessions later than the rest. Of the six students who were present for the whole program, only two improved their scores over the pre test, while the other four scored slightly lower than they did on the pre test.

As I mentioned before, I think the length of time between the conclusion of the sessions applying those concepts and the administration of the post test could have been one of the reasons for the poor post test scores. Another reason could have been the kids' attitudes towards the tests. I administered many tests over the course of the program: one at the beginning of each new topic and one at the end, so there was a test almost every two to three sessions. The farther into the program we got, the more complaints I received about the tests. Eventually, some of the kids would rush through the tests without applying themselves or really trying to answer the questions. Some of them tried to do the same thing on the final post test, even after I explained my situation to them.

I would still like to comment on the success of some of the other sessions, even though I do not have any conclusive data. Some of the students said at the end of our final session that the cultural history was boring, but I think that in the long run it kept them more interested in the math concepts than they would have been if I had just tried to teach them the material directly. The interactivity of the CSDTs really helps keep the kids interested in what they are doing, rather than making them focus solely on the math concepts.

One thing I did notice about my teaching that I would like to improve upon is that the kids were able to understand the concepts behind the math, but they weren't necessarily able to apply them to answering the questions on the pre and post tests. I would have liked to help them better make the connection between the concepts and the test questions.

I think the program in Troy was an overall success, even if we don't have hard data to support that. Even though the students were not able to retain all the knowledge they had gained over the summer, I still think they benefited from the program. If I can improve my teaching so that the students can apply the concepts they learn, I think they will learn much more and be better able to retain the information.
