What accounts for the gender gap in (high) test scores, for the fact that SAT scores predict college success less well for women than for men, and perhaps for other gaps that persist between men and women? An innovative line of inquiry is carried out in a paper whose title telegraphs what may be part of the answer:
Gender Differences in Willingness to Guess and the Implications for Test Scores by Katherine Baldiga
Here's the Abstract: "Multiple-choice tests play a large role in determining academic and professional outcomes. Performance on these tests hinges not only on a test-taker's knowledge of the material but also on his willingness to guess when unsure about the answer. In this paper, we present the results of an experiment that explores whether women skip more questions than men. The experimental test consists of practice questions from the World History and U.S. History SAT II subject tests; we vary the size of the penalty imposed for a wrong answer and the salience of the evaluative nature of the task. We find that when no penalty is assessed for a wrong answer, all test-takers answer every question. But, when there is a small penalty for wrong answers and the task is explicitly framed as an SAT, women answer significantly fewer questions than men. We see no differences in knowledge of the material or confidence in these test-takers, and differences in risk preferences fail to explain all of the observed gap. Because the gender gap exists only when the task is framed as an SAT, we argue that differences in competitive attitudes may drive the gender differences we observe. Finally, we show that, conditional on their knowledge of the material, test-takers who skip questions do significantly worse on our experimental test, putting women and more risk averse test-takers at a disadvantage."
Katie's experiment is designed as follows. Each subject participates in only one of four experimental conditions, determined by two factors: whether points are subtracted for wrong answers, and whether the test is framed as an SAT by reminding participants that the questions are drawn from SATs and will be scored like SATs. That's the "between subject" part of her design, and the abstract makes clear how those comparisons played out.
The "within subject" part of the design is that every subject took three tests. The first consisted of the SAT questions, and subjects were free to skip questions they did not feel confident they could answer. The second was a test of risk aversion. The third consisted of the same SAT questions as the first, but subjects were asked to answer every question, even those they were not confident they knew. Having data from all three tests for each subject lets Katie compare first-test performance among subjects who did equally well when required to answer every question, and determine how much of the question-skipping can be accounted for by differences in risk aversion. This is what lets her see that the women skipped more questions, and so got lower scores than the men, even when they could answer the same number of questions correctly; it could also be one of the reasons the women's scores would predict future performance less well than the men's.
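To see why skipping hurts conditional on knowledge, a back-of-the-envelope calculation helps. Here's a minimal sketch (not from the paper) using the historical SAT rule of a 1/4-point penalty for a wrong answer on a 5-choice question; the function name and exact penalty size are my assumptions for illustration, not Katie's experimental parameters.

```python
def expected_score_if_guessing(p_correct, penalty=0.25):
    """Expected points from answering a question, given probability
    p_correct of choosing the right option. A correct answer earns
    +1, a wrong one costs `penalty`; skipping would score 0."""
    return p_correct * 1 - (1 - p_correct) * penalty

# A blind guess among 5 options (p = 1/5) has expected value 0,
# exactly the same as skipping -- so guessing never hurts on average.
print(expected_score_if_guessing(1 / 5))   # 0.0

# Eliminate even one implausible option (p = 1/4) and guessing
# strictly beats skipping in expectation.
print(expected_score_if_guessing(1 / 4))   # 0.0625
```

So under this kind of scoring rule, a test-taker with any partial knowledge who skips is leaving expected points on the table, which is consistent with the paper's finding that skippers do worse conditional on what they know.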
Katie is a theorist as well as an experimenter: here are her other papers, on social choice theory. She is on the job market this year; you could hire her.
If you are at the ESA meetings in Tucson tomorrow you can also listen to her in what looks to be a great session (I've heard all the speakers before): Friday, November 11, 1:20 pm – 2:40pm,
Session 4, Ocotillo: GENDER 2
Katherine Baldiga, “Gender Differences in Willingness to Guess”
Johanna Mollerstrom, “Framing and Gender: It's all about the Women”
Muriel Niederle, “Do Single-Sex Schools Make Boys and Girls More Competitive?”