Division of Science Resources Studies
Professional Societies Workshop Series

Attrition: What Can We Measure? How Should We Measure?

Charlotte Kuh, Office of Scientific and Engineering Personnel, National Research Council

Thank you. I, too, see a number of friends from multiple incarnations in this audience, and it is really nice to be here today. When Alan Rapoport invited me to speak on this topic, I will admit to a very personal flashback to November of 1975. I was an acting assistant professor at Stanford, I was teaching two courses, I had a two-year-old and a four-year-old at home, and I was working long-distance with my thesis advisors to complete my dissertation at Yale. One day I received a letter from John Perry Miller, who was dean of the graduate school at Yale. It said, we would like to know why you have decided not to complete your doctoral program. I was furious and scribbled on the bottom of the letter, I am completing my doctoral program, and expect to receive my degree in June, and mailed it back. So here I am, an attrition statistic, resurrected.

It is clear, in retrospect, that John Perry Miller, may he rest in peace, was measuring attrition as failure to complete by a given date: in this case, seven years after entry into the program. He had not checked with my advisors; he had just applied the date. That makes some administrative sense, something I understand now. But is that the way we want to measure attrition?

I think I was asked to speak to you, however, not because of my shady past, but because I co-chaired an NRC panel that looked into the question of whether it was possible or desirable to create national measures of attrition for graduate education. I am going to focus primarily on the quantitative side of the question, and others today are going to be talking about the qualitative side. This panel concluded that creation of national attrition measures was desirable, but not possible, and I would like to explain the reasons for that conclusion.

Let's begin by asking: What is attrition? Given an entering cohort, attrition is the proportion of a cohort that does not complete the graduate program undertaken. Simple. Now let's measure it. We immediately run into problems. We must ask: Do not complete by what date? Seven years used to be the norm, but average time to degree in a number of humanities fields is now over 10 years. We would not want to hold them to a seven-year limit. That would give us grossly overstated rates of attrition. There is an even worse problem here. When does a program know that a student has attrited? Many but not all schools now have students who have finished their course work and pay reduced tuition until they complete. A student is assumed to have dropped out of the program when the money stops. But, in fact, and as Claudia mentioned, they may return years later and complete. It is clear from an examination of the AAU-AGS data that I think you will hear about later today, that there is lots of stopping out and dropping in. How do we get a handle on such a mobile population?
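The distortion described above can be made concrete with a small sketch. This is a toy illustration with made-up numbers (the cohort list and the `attrition_rate` helper are hypothetical, not from any actual institutional data): when a fixed cutoff date is applied, every student who finishes late, stops out, or is still enrolled gets counted as attrition.

```python
# Hypothetical cohort: years to degree for each entrant,
# or None for a student who never completes.
cohort = [4.5, 5.0, 6.0, 7.5, 9.0, 11.0, None, None]

def attrition_rate(cohort, cutoff_years):
    """Share of the cohort not finished by the cutoff.

    A fixed cutoff treats late finishers exactly like true dropouts,
    which is how a still-enrolled student becomes an attrition statistic.
    """
    unfinished = sum(1 for t in cohort if t is None or t > cutoff_years)
    return unfinished / len(cohort)

print(attrition_rate(cohort, 7))   # 0.625 -- three late finishers counted as dropouts
print(attrition_rate(cohort, 12))  # 0.25  -- only the true non-completers remain
```

With a seven-year cutoff, the measured rate is 62.5 percent; waiting long enough to observe the slow finishers cuts it to the true 25 percent. The gap between the two numbers is exactly the overstatement the cutoff introduces.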

Here is another problem. What is an entering cohort? Some programs offer only Ph.D. programs and an entering cohort is relatively easy to define. Those are typically research universities that only offer Ph.D. programs and master's degrees are kind of viewed as consolation prizes. Some programs do not admit students to doctoral study until after completion of the master's degree or after completion of comprehensive exams, or even after defense of the dissertation prospectus. A national measure of attrition would have to standardize across a highly diverse set of institutional practices, and I am sure there are people in the audience who know the diversity of those institutional practices.

Those are the reasons why I think, and my panel thought, it would be extraordinarily difficult to obtain a national measure of doctoral student attrition. But we also said it was desirable if such a measure could be designed.

The next important question is: Who wants to know? First, students. Students deserve to know what they are getting into ahead of time and certainly should know if they are choosing between an endless program and a shorter one with a higher chance of completion. Attrition statistics could be useful as a sort of truth-in-packaging, and it would not hurt at all. Students need to know whether, on average, students in the programs that interest them are likely to complete. They also need to know whether the first year is a boot camp or whether faculty encourage them to get into the field.

Attrition information would also be useful to faculty and administrators who wish to compare programs at their own institutions to those elsewhere. It could be useful information to improve the treatment of graduate students across institutions. Such information would also be useful to policymakers at the state and Federal levels who want to know if the resources they expend on graduate education are being used efficiently.

We also need to remember that attrition is an outcome, again as Claudia mentioned. As such, data on attrition need to be collected along with other data that we think would be useful in explaining the outcomes of graduate school. For example, do rates of attrition vary with type or amount of financial aid?

But frankly, as I thought about it over the weekend, it occurred to me that attrition and its causes may be much more slippery than a simple measure of efficiency. As an economist, I am permitted to say, let us assume we have collected good data on attrition. Now the question is: What good is it? It would be a fine outcome to measure if less attrition were good and more attrition were bad. For example, more financial aid would probably lower rates of attrition. But what about the effect of better information about employment prospects? Higher rates of attrition may reflect a response to better information of what happens upon completion as compared to other opportunities for enrolled graduate students. Is that attrition bad? Did somebody make the wrong decision going to graduate school only to learn that there were endless postdocs at the end of it? Would less information lower this kind of attrition? How would we lower those rates, and would we want to do that anyway?

Finally, in this age of paperwork reduction and government reinvention, we need to ask: Is our ignorance so great that we need to develop a uniform data collection methodology to which all universities that offer graduate programs should adhere? If not, should NSF serve as a collection point for data that universities do collect, even though in doing so we lose institutional comparability? Should the Federal Government collect data on attrition at all, especially if there is no guarantee of quality or comparability? Might we learn more from carefully targeted qualitative studies, or from following a cohort or successive cohorts as they move through graduate education? The appeal of the cohort approach is that you can tell a lot about what happens to students through surveys without collecting data from all institutions at enormous cost. But this sort of data collection will not tell you what happens to institutions or to programs in particular fields. Getting that sort of detail would be very expensive; on the other hand, you could probably manage it with data from a panel of schools.

So really I wrote a talk with a couple of questions and all I am going to leave you with is a bunch of questions. I am really looking forward to the rest of the speakers today. I have spoken very briefly and I hope that together we can start to answer some of the questions.

Jeanne Griffith: Thank you very much, Charlotte. I think it is perfectly appropriate to leave us with questions, we surely need them. We have some time for some questions for our speakers before we take a break this morning.

Gerard Crawley, Michigan State University: Perhaps one suggestion for Charlotte might be that instead of trying to measure just a single number for attrition, one should look at a time sequence of what happens to students: registration as a function of time and, perhaps, success in getting degrees as a function of time. If you look at those profiles for different departments, you get a very interesting picture of different departments' performance. It would not matter so much whether an individual student is still active after 8 or 10 years, but you would get a view of what is happening in different departments.

Charlotte Kuh: I think that is a very good idea. I should add that when I talked to my husband at dinner last night about what I was doing today (he is a mathematical statistician and economist), I got a lecture about light bulbs. And it turns out that you do not have to follow every light bulb in your house to figure out when it is going to give out. In fact, you can observe what has happened to old light bulbs and then look at what is happening to new ones, and assume a distribution. You can compare programs in the way that you suggest. I think that if the statistical people at SRS or elsewhere think a little bit more about this (I do not know whether graduate students are as reliable as light bulbs or as carefully engineered), it would be very interesting to think about some statistical techniques to model attrition.
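The light-bulb idea being discussed here is, in statistical terms, survival analysis with censoring: you estimate the distribution of times to completion from the students you can observe, even though many are still enrolled when you look. A minimal sketch of that technique follows; the data and the `completion_curve` function are hypothetical, invented purely to show a Kaplan-Meier-style calculation, and no real program looks like this.

```python
# Hypothetical cohort data: (years_observed, completed) pairs.
# completed=False means the observation is censored: the student was
# still enrolled (or lost to follow-up) after that many years.
observations = [(4, True), (5, True), (6, True), (6, False),
                (7, True), (8, False), (9, True), (10, False)]

def completion_curve(observations, horizon):
    """Kaplan-Meier-style estimate of P(not yet completed) by year.

    Each year, the completion hazard is estimated from the students
    still "at risk" (observed at least that long), so censored students
    contribute information up to the point where they disappear.
    """
    surviving = 1.0
    curve = {}
    for year in range(1, horizon + 1):
        at_risk = sum(1 for t, _ in observations if t >= year)
        completions = sum(1 for t, done in observations if t == year and done)
        if at_risk:
            surviving *= 1 - completions / at_risk
        curve[year] = surviving
    return curve

curve = completion_curve(observations, 10)
print(curve[6])   # 0.625    -- estimated share still uncompleted at year 6
print(curve[10])  # 0.234375 -- estimated share still uncompleted at year 10
```

Comparing such curves across departments is one way to do what the previous question suggested: the whole time profile, rather than a single attrition number, becomes the unit of comparison.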

Barbara Lovitts, University of Maryland: You mentioned that it is difficult to define who a Ph.D. student is because institutions use different definitions. However, graduate students define themselves as doctoral students or not, regardless of what stage they are at and regardless of how the university defines them. The student in the first year of a program who wants the Ph.D. will define himself or herself as a doctoral student, even if the program does not. Of course, you are not going to get accurate information from students' applications for admission, because they all know the game: if you do not say that you want a doctorate, you are not going to get funding. But if you were able to conduct an independent survey of entering students and get their self-definitions of whether they intend to get a Ph.D., intend a master's, or are not sure of their degree goals (and you would need to look at changes in self-definition over time), then you would have a more accurate definition of who is a graduate student and who becomes a Ph.D. attrition statistic.

Claudia Mitchell-Kernan, University of California, Los Angeles: Of course it is at times appropriate to examine students' own perceptions of whether or not they are in fact Ph.D. students and what happens to them. However, reliance solely on students' self-designation to measure attrition introduces problems of comparability across institutions. Even at the institutional level some conceptual refinements are necessary to formulate a measure of attrition that is comparable across institutions and departments within institutions, and that permits institutions to determine whether or not they have an attrition problem that they can then do something about. Somehow, one has to strike a balance between the institutional and the individual perception.

Orlando Taylor, Howard University: I want to begin by thanking Claudia and Charlotte for their wonderful presentations to get us started this morning. But we are talking about graduate student attrition, and we have in an unspoken way given short shrift to the master's degree student, particularly in an environment where a disproportionately large number of our students are indeed master's students seeking terminal master's degrees.

Charlotte Kuh: That is probably true, and we have worried more in some ways about doctoral students because of our investment in those students and, in part, because of our traditional view of the sector of the labor market they are likely to enter: the academic sector. We now know that the labor market for advanced training at the graduate level is much more differentiated. My own experience from looking at institutional data suggests that those who enter master's programs are largely individuals entering professional areas. In many respects, the professional areas, rather than the academic areas, have a better record of success if it is measured by degree completion.

So part of the answer is that the master's level has not been the same kind of problem that the doctoral level is. There is another side of this. When one wants to try to sort out what are some of the sources of variance, the master's students in many of the professional areas are also more mature students. They have had an opportunity to be out in the world of work for a time and they have made a conscious decision that they need to return to pursue some additional advanced training, and they may be somewhat more sophisticated about their choice of area for advanced training.

There is something that has intrigued me over the last couple of years, as I have sat on the research committee of the Graduate Record Examination Board, and it is work that has been done by an ETS researcher named Jerilee Grandy. She has been interested in careers and in reasons that people leave different areas for other kinds of careers. In some ways, it is under the influence of some of her findings that I have said that failure to complete is not always a problem. Not only do people not complete the doctorate for good reason, but also sometimes they leave those fields for which they were trained to go to other fields. As we are seeing more rapid shifts in our labor market, we should expect that people are going to be changing fields more often. Some of the fluctuations that we witness across both master's and doctoral education may reflect those decisions in the face of a more fluid labor market.

Jules LaPidus, Council of Graduate Schools: I was very interested in Dr. Bertenthal's graph because Derek De Solla Price's figures and David Goodstein's figures were used at one time to show what was going to happen eventually, and now they are being used properly to show what is not going to happen in the long-run or what has not happened. This seems to me another validation, perhaps, of the fact that the collection of data and the analysis of data are very good to tell us where we have been and where we are, but we really have a lot of problems when we draw those lines out to try to tell us where we are going. I do not think that graduate students are as well-engineered as light bulbs, and this is one of the problems when you get into biological or social systems, because it really varies a lot.

I know we are going to have some discussion this afternoon about a qualitative approach to this, but in thinking about the discussion so far, I am reminded of the fact that a few years ago the Committee on Mathematical Sciences in the NRC did a study of why certain doctoral and postdoctoral programs were particularly successful in producing completed degrees for women and for American minorities. I always thought it was wonderful that the mathematicians, particularly, decided not to do a quantitative study, but actually to go out and talk to people. They found, as many of you know who have seen that study, that three factors were involved. They are all big surprises. One was a supportive learning environment. Another (and I think this is an extremely important one for doctoral studies) is that there seemed to be agreement among the faculty, the students, the administration, and probably the employers as to what the purpose of the program was. That is something I think we do not think about often enough. Third, there was some opportunity in these programs for students to actually function as professionals, that is, to get involved with their fields and to do the things that people in those fields do. I thought that was very interesting, and I think it is going to be interesting to see the data that come out of the Berkeley Graduate School study on what happens to people 10 years later.

I very much agree with Claudia that we talk about attrition as if it had negative connotations, and it is not necessarily bad. Some people leave because they made the wrong decision. Some people leave because they had a very good experience and have decided to do something else, and that is fine. There is a cost question there, and there is a very important public policy question. When somebody talks about people getting Ph.D.s and then becoming K-12 teachers, first of all it is hard for me to tell whether that would be very good for K-12 or not, but beyond that, it is a pretty expensive way, as my colleague Peter Syverson tells me, of preparing K-12 teachers. Is that really what we want to do?

I think we have to look at all of these questions. But I am just raising a flag here about spending a great deal of time constructing exquisite mathematical models for projecting what is going to happen in the next 10 or 20 years when, in many cases, projections (particularly those having to do with graduate education and what happens to jobs or to students) have not been really terrific in terms of what actually has happened.

Bennett Bertenthal, NSF: I think the point is very well taken, but I also would like to add that much depends on constructing the right mathematical model. Clearly, any time we are dealing with a biological system, we are not going to have a linear function. That was actually the implicit point of looking at an exponential function that is exploding. It has to be at the very least a logistic function and more importantly, I think what was just pointed out is the tyranny of putting too much emphasis on any one measure. We need to look at multiple measures and how they interact together. We need to recognize that these are extremely complex stories.

As I was listening to our two speakers this morning, I was just running through in my own mind why some of my students seemed to just cruise through graduate school, while others did not and had multiple problems. I was thinking, could I have predicted that when they began graduate school, and the clear answer is no, I could not have. I still do not understand why some of them had the difficulties they did and others were so incredibly successful. These are extremely difficult questions that cannot be answered either exclusively with quantitative data or just with case studies, so we need to draw on all the resources at our disposal.

Paul Seder, National Institutes of Health: I talked recently with an expert on international higher education, Alan Kerckhoff at Duke University. His claim was that we have far too many undergraduates, and that they are much more unfocused in their career and lifetime goals than those in other countries. Then they end up in graduate schools, some of them as a default option, and for many of them it is a free good. We happen to have an extremely dynamic economy where there is a lot of mobility, so it is also not a bad thing that people move around. We might also have a desirable baseline of 10 or 20 percent or more of people moving out because they are not qualified or there is not a good match. We are not that good at selecting. There might be an additional 10 to 20 or more percent with life circumstances, perceptions, or needs that do in fact change. So if you add those together, we might have attrition of 20, 30, or 40 percent, which would be perfectly acceptable and not a problem.

I have one comment on measurement. I think that rather than jumping in with a huge, expensive study, we should first gather every piece of information that is already out there and then start with small qualitative and quantitative studies. Then we could figure out how to design a larger study properly.

Claudia Mitchell-Kernan, University of California, Los Angeles: I am not sure what an acceptable level of attrition is. I suspect that if it were 10, 12, or 15 percent, I would not have been among those who wanted to see this kind of conference convened. We know that it is really a lot higher than that. We also know that there is considerable variance across fields. In chemistry and molecular biology attrition may be 10 to 20 percent. In other fields it may be 40 to 50 percent.

When I first took note of these statistics, I wondered whether or not the science departments were attracting much better students and whether what was different among the departments with high and low attrition rates was the quality of students being recruited. As I looked at the most superficial measures of quality and selectivity, what I found was totally counter-intuitive. The students in the humanities and social sciences who did not finish were a more select group than those who were very successful in areas like chemistry and molecular biology. I concluded from that that we needed to take a look at what was going on institutionally.

Janice Madden, University of Pennsylvania: I want to pick up on something that Dr. Bertenthal mentioned: Is there an alternative to picking out who is going to make it and who is not? Can we make these decisions earlier, or do we need the experience of the program to be able to tell? I have two very excellent graduate programs at the University of Pennsylvania, one of which has virtually no attrition but is almost impossible to get into, so they are very selective at admission. The other program admits lots of students and tests them through the first year, which you might call a hazing process. But they would argue that we are willing to cast a wider net and see who can master the basic skills for the first year. If they cannot master the basic skills, then we cut the students there, and they go on with their lives and we go on with our programs. Neither program, from the standpoint of the graduate dean, has much of a problem, in the sense that all of the students and the faculty understand the rules from the very beginning and everybody accepts it.

If I look at assistant professor recruitment, these two departments do the same thing. In one virtually all of the recruits get tenure, and in the other virtually none do. Looking at them objectively, these programs are working with very different attrition rates. I have come down much harder on programs that have high attrition rates at years three and four and very little at the beginning, and try and push them to make these judgments. The real issue is whether or not you can judge what it takes. If you cannot judge what it takes, then we may be better off with much higher attrition rates as people learn and experience and allow everybody the opportunity.

Jeanne Griffith: Before we break, I would like to introduce the moderator for the next session, Alan Tupek, who is the deputy division director of the Division of Science Resources Studies.
