I interviewed Stephen Harris, former LSAT question-writer and author of Mastering Logic Games. (He’s written hundreds of the questions that appear in your books of LSAT PrepTests.)
This is part 7 of the series of interviews. You can also get them all in a free book I put together.
Our discussion follows.
***
Do you think the LSAT is really relevant to law school (and serves as a good gatekeeper test)? Or do you think it would be better if it were based on “prelaw-type questions” instead?
I actually do think that the stuff the LSAT tests is pretty close to the general-purpose skill set that many lawyers need to be successful. Law is a huge field with a wide variety of jobs, but many of them in fact involve constructing and evaluating arguments and carefully reading complex texts. The LSAT does a pretty good job of identifying folks who do these things relatively well, in my opinion. That said, many people who do not score well on the LSAT would still make perfectly fine attorneys. As for “prelaw-type questions”: if you mean questions that test content, or “know that,” rather than practical knowledge, or “know how” (which is, I think, the best way to think about the LSAT), then I don’t think that is what we want a test like the LSAT to try to measure. To the extent that law schools find content knowledge relevant, they could probably do no better than to check your college transcript.
If the LSAT is supposed to be (and is) a predictor of how well one does in law school, why not include a graded writing section, considering that how you do in law school is based solely on essay-style exams? Wouldn’t this make it an even better predictor?
A few thoughts. To begin with, the LSAT is just one component of the admissions process; it is designed to assess a limited range of skills and to be used in conjunction with other information to evaluate applicants. Yes, the more information a school has about its applicants, the better it can choose among them, but it isn’t clear that it is the LSAT’s job to provide that information. After all, transcripts, recommendations, and all the rest can reveal things about a student that no one test should even try to measure. And even if the LSAT should be changed, I’m not convinced that adding a graded writing section would be the way to go. First of all, I’m not sure how much useful information one would get from a timed writing assignment like this. Then there’s the problem of uniformly grading written material, which is tough to do if the writing task is at all sophisticated. (This would probably lead to many unhappy test takers challenging their scores, if that option even remained available.) Then there’s the issue of expense: an essay would certainly increase the cost of taking the LSAT, making economic considerations even more significant in the decision whether to apply to law school. On the whole, there really doesn’t seem to be a compelling case for the LSAT to take on this extra responsibility.
How do you choose what kinds of skills/patterns to emphasize? For example, around 10 years ago, in/out games were not very common; then they started becoming super common. Then, a few years ago, they tapered off. Do you get directives from LSAC?
“We’ve got too many ‘if but only if’ questions; lay off for a few months, will you? And for crying out loud, it’s been too long since we’ve had a good mapping game or a circular setup game. Get on that!”
Almost all of my experience writing LSAT items involved logical reasoning, but I imagine that similar considerations apply to the other sections. An assignment consisted of 10 items. Typically, the assignment would specify a certain number/range of items of various types, and would generally leave a bit of leeway for a couple of items of pretty much whatever type you wished. They would definitely reject items that were technically fine simply because they already had plenty of that type or on that topic. (“No more dinosaurs and asteroids for a while!”)
As for the larger question of trends from one year to the next: good question. I don’t know for sure, but I’d bet decisions are based at least partly on their internal metrics of validity; that is, they find that certain games are helpful in sorting test takers the way they want them to be sorted.