Science and Engineering Indicators has been assessing Americans’ knowledge about science and technology since 1979. Initial questions focused on the proper design of a scientific study and views about whether pseudoscientific belief systems, such as astrology, could be considered scientific. Questions focused on an understanding of probability and an understanding of basic constructs were added in the late 1980s and early 1990s (Miller 2004). These later questions remain the core of the available data on trends in adult Americans’ knowledge of science.
Researchers have questioned both the degree to which scientific literacy has a substantial impact on how people make decisions in their public and private lives (see, for example, NSB 2012:7-27; Bauer, Allum, and Miller 2007) and whether a short battery of questions can assess scientific literacy. Despite the limitations of these indicators, evidence suggests that knowledge about science, as measured by the GSS, has a small but meaningful impact on attitudes and behaviors (Allum et al. 2008). In addition, adult responses to an expanded list of knowledge questions drawn from tests given to students nationwide indicate that people who “answered the additional factual questions accurately also tended to provide correct answers to the trend factual knowledge questions” included in the GSS (NSB 2010:7-20). This finding suggests that the trend questions used in this report represent a reasonable indicator of basic science knowledge. At the same time, in light of the limitations of using a small number of questions largely keyed to knowledge taught in school, generalizations about Americans’ knowledge of science should be made cautiously. Toumey et al. (2010) recommended additional research aimed at developing a measure of S&T literacy focused on how people actually use S&T knowledge. Similar challenges confront attempts to study health literacy (Berkman, Davis, and McCormack 2010) and political literacy (Delli Carpini and Keeter 1996). More generally, in developing measures for what is often termed scientific literacy across nations, the Organisation for Economic Co-operation and Development (OECD 2003) emphasizes that scientific literacy is a matter of degree and that people cannot be classified as either literate or not literate. The OECD noted that literacy had several components:
Current thinking about the desired outcomes of science education for all citizens emphasizes the development of a general understanding of important concepts and explanatory frameworks of science, of the methods by which science derives evidence to support claims for its knowledge, and of the strengths and limitations of science in the real world. It values the ability to apply this understanding to real situations involving science in which claims need to be assessed and decisions made...
Scientific literacy is the capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity. (OECD 2003:132–33)
Widely used indicators of basic scientific literacy include the degree to which respondents demonstrate an understanding of basic scientific terms, concepts, and facts; an ability to comprehend how S&T generates and assesses evidence; and a capacity to distinguish science from pseudoscience.
The 2012 GSS continues to show that many Americans provide multiple incorrect answers to basic questions about scientific facts and do not apply appropriate reasoning strategies to questions about selected scientific issues. Residents of other countries, including highly developed ones, appear to perform no better, on balance, when asked similar questions.
A primary indicator of public understanding of science in the United States comes from a nine-question index of factual knowledge questions included in the GSS. In 2012, Americans correctly answered an average of 5.8 of the 9 items (65%), slightly higher than in 2010 (5.6 of 9 items, or 63%) (appendix table
The public’s level of factual knowledge about science has not changed much over the past two decades (figure
Factual knowledge of science is strongly related to people’s level of formal schooling and the number of science and mathematics courses completed. For example, those who had not completed high school answered 45% of the nine questions correctly, and those who had completed a bachelor’s degree answered 78% of the questions correctly. The average percentage correct rose to 83% among those who had taken three or more science and mathematics courses in college (figure
Factual knowledge about science is also associated with the respondent's sex. On average, men answer more factual science knowledge questions correctly (70% correct) than women do (60% correct) (figure
The GSS survey includes two additional true-or-false science questions that are not included in the index calculation because Americans’ responses appear to reflect factors beyond unfamiliarity with basic elements of science. One of these questions addresses evolution, and the other addresses the origins of the universe. To better understand Americans’ responses, the 2012 GSS replicated an experiment first conducted in 2004 (NSB 2006). Half of the survey respondents were randomly assigned to receive questions focused on information about the natural world (“human beings, as we know them today, developed from earlier species of animals” and “the universe began with a big explosion”). The other half were asked the questions with a preface that focused on conclusions that the scientific community has drawn about the natural world (“according to the theory of evolution, human beings, as we know them today, developed from earlier species of animals” and “according to astronomers, the universe began with a big explosion”).
In 2012, respondents were much more likely to answer both questions correctly if the questions were framed as being about scientific theories or ideas rather than about natural world facts. For evolution, 48% of Americans answered “true” when presented with the statement that human beings evolved from earlier species with no preface, whereas 72% of those who received the preface said “true,” a 24 percentage point difference. These results replicate the pattern from 2004, when the percentage answering “true” went from 42% to 74%, a 32 percentage point difference (NSB 2008). For the big bang question, the pattern was very similar: in 2012, 39% of Americans answered “true” when presented with the statement about the origin of the universe without the preface, whereas 60% of those who heard the statement with the preface answered “true.” This represents a 21 percentage point difference. The 2004 experiment found that including the preface increased the percentage who answered correctly from 33% to 62%, a 29 percentage point difference (NSB 2008). Residents of other countries have been more likely than Americans to answer “true” to the evolution question.
Researchers in a range of countries have asked adults in their countries identical or substantially similar questions to test their factual knowledge of science in past years. Knowledge scores for individual items vary from country to country, and no country consistently outperforms the others. For the physical science and biological science questions, knowledge scores are relatively low in China, Russia, and Malaysia. Compared with scores in the United States and the EU overall, scores in Japan are also relatively low for several questions (table
Science knowledge scores have also varied across Europe, with northern European countries, led by Sweden, scoring the highest on a set of 13 questions. For a smaller set of four questions, administered in 12 European countries in 1992 and 2005, each country performed better in 2005. In contrast, U.S. data on science knowledge did not show upward trends over the same period. In Europe, as in the United States, men, younger adults, and more highly educated people tend to score higher on these questions (NSB 2008).
The 2011 BBVA Foundation survey of 10 European countries and the United States included a set of 22 knowledge questions that were mostly different from those that have traditionally been included in Indicators. On average, the United States—with a mean score of 14.3 correct answers—performed similarly to many of the European countries surveyed, with a score close to the European average (13.4). The highest scoring countries were Denmark (15.6) and the Netherlands (15.3). Germany (14.8), the Czech Republic (14.6), Austria (14.2), the United Kingdom (14.1), and France (13.8) all had scores similar to those of the United States.
There were, however, some questions on which Europeans did much better than Americans. For example, for the statement, “the earliest humans lived at the same time as the dinosaurs,” about 43% of Americans correctly answered “false,” whereas 61% of Europeans in the 10 countries surveyed gave the correct response. Another question on which Americans did substantially worse focused on nuclear energy. About 47% of Americans correctly indicated that the “greenhouse effect” is not caused by the use of nuclear energy, compared with 58% of Europeans. Conversely, there were several questions on which Americans did substantially better (BBVA Foundation 2012a).
Little international polling is done on the question of evolution or the big bang. However, residents of other countries have typically been more likely than Americans to say they believe that “human beings, as we know them today, developed from an earlier species of animals.” For example, 70% of European respondents in 2005 (NSB 2006) and 76% of Japanese respondents in 2011 (NISTEP 2012) gave this response.
Another indicator of public understanding of science focuses on understanding of how science generates and assesses evidence, rather than knowledge of particular facts. Such measures reflect recognition that knowledge of specific S&T facts is conceptually different from knowledge about the overall scientific processes (Miller 1998).
Data on three general topics—probability, experimental design, and the scientific method—show trends in Americans’ understanding of the process of scientific inquiry. One set of questions tests how well respondents apply the principles of probabilistic reasoning to a series of questions about a couple whose children have a 1 in 4 chance of suffering from an inherited disease. A second set of questions deals with the logic of experimental design, asking respondents about the best way to design a test of a new drug for high blood pressure. A third, open-ended question probes what respondents think it means to “study something scientifically.” Because probability, experimental design, and the scientific method are all central to scientific research, these questions are relevant to how respondents evaluate scientific evidence. These measures are reviewed separately and then as a combined indicator of public understanding about scientific inquiry.
With regard to probability, 82% of Americans in 2012 correctly indicated that the fact that a couple’s first child has the illness has no relationship to whether three future children will have the illness. About 72% of Americans correctly responded that the odds of a genetic illness are equal for all of a couple’s children. Overall, 65% got both probability questions correct. Understanding of probability has been fairly stable over time, with the percentage giving both correct responses ranging from 64% to 69% since 1999 and going no lower than 61% dating back to 1990 (table
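The independence principle these two probability questions test can be illustrated with a small simulation. The sketch below is not from the survey; it simply assumes the scenario described (each child of the couple independently has a 1 in 4 chance of inheriting the illness) and shows that a first child's illness carries no information about later children:

```python
import random

random.seed(42)

P_AFFECTED = 0.25  # each child has a 1-in-4 chance, per the survey scenario

def simulate_family(n_children=4):
    """Independently draw affected/unaffected status for each child."""
    return [random.random() < P_AFFECTED for _ in range(n_children)]

families = [simulate_family() for _ in range(100_000)]

# Rate for the second child, conditioned on the first child being affected:
with_first_affected = [f for f in families if f[0]]
rate = sum(f[1] for f in with_first_affected) / len(with_first_affected)

# Unconditional rate for the second child:
overall = sum(f[1] for f in families) / len(families)

print(f"P(second affected | first affected) ~ {rate:.3f}")
print(f"P(second affected)                  ~ {overall:.3f}")
# Both hover near 0.25: the first child's status changes nothing.
```

Respondents who answered both questions correctly were, in effect, applying this reasoning: the odds are equal for all of the couple's children, regardless of earlier outcomes.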
With regard to understanding experiments, one-third (34%) of Americans were able to answer a question about how to test a drug and then provide a correct response to an open-ended question that required them to explain the rationale for an experimental design (i.e., giving 500 people a drug while not giving the drug to 500 additional people as a control group). A smaller percentage of people answered this set of questions correctly in 2012 than in 2010, when 51% did so (table
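The rationale behind the drug-testing question can be sketched as a toy simulation. The numbers below (baseline blood pressure of 140 mmHg and a 10 mmHg drug effect) are illustrative assumptions, not survey content; the point is the design itself, in which random assignment makes the two groups comparable so that the treatment-control difference isolates the drug's effect:

```python
import random

random.seed(1)

N = 1_000
participants = list(range(N))
random.shuffle(participants)      # randomization: the key design step

treatment = participants[:500]    # 500 people receive the hypothetical drug
control = participants[500:]      # 500 people serve as the control group

# Hypothetical outcome model: blood pressure varies naturally around
# 140 mmHg; the drug (by assumption here) lowers it by 10 mmHg on average.
def outcome(treated):
    natural = random.gauss(140, 15)
    return natural - (10 if treated else 0)

treated_mean = sum(outcome(True) for _ in treatment) / len(treatment)
control_mean = sum(outcome(False) for _ in control) / len(control)

# The between-group difference recovers the assumed ~10 mmHg effect,
# which a before/after comparison without a control group could not isolate.
print(f"estimated effect: {control_mean - treated_mean:.1f} mmHg")
```

Explaining why the untreated group is necessary, roughly the logic in the comments above, is what the open-ended follow-up question asked respondents to do.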
The percentage of people judged by the 2012 GSS as understanding what it means to study something scientifically was more consistent with previous surveys. About 20% of Americans were scored as correctly answering the GSS question on this topic. When describing the scientific method, these respondents mentioned that it involves at least one of the following: testing a theory using hypotheses, conducting an experiment with a control group, or making rigorous and systematic comparisons. The percentage of Americans providing at least one of these acceptable answers has declined somewhat from a high of 26% in 2001, although the 2012 result is similar to percentages in recent years.
Overall, when these questions are combined into an overall measure of “understanding of scientific inquiry,” the 2012 results are relatively low compared with those from other years. About 33% of Americans could both correctly respond to the two questions about probability and provide a correct response to at least one of the open-ended questions about experimental design or what it means to study something scientifically. The 2010 survey represents a high point (42%), and the current result is closest to scores seen in the late 1990s but lower than scores in the other surveys conducted since 2001 (table
The 2011 BBVA Foundation survey of 10 European countries and the United States included the standard question about probability in the context of genetic disease. In this instance, 61% of Americans could correctly indicate that a child’s susceptibility to a genetic disease was unaffected by whether the child’s siblings suffered from the disease. This percentage is substantially lower than the 82% found in the 2012 GSS (see previous section). The 10-country European average was 49%, but residents of both Denmark (81%) and the Netherlands (79%) did better on this question than Americans. UK residents (60%) had a score nearly identical to that of U.S. residents (BBVA Foundation 2012a).
Recent surveys from Asia also touch on reasoning and understanding. A 2010 Chinese survey reported that 49% understood the idea of probability, 20% understood the need for comparisons in research, and 31% understood the idea of “scientific research” (CRISP 2010). The exact wording of the questions used was not available, but given that much of the survey replicated past U.S. questions reported in Science and Engineering Indicators, it seems likely that these questions were similar to those asked in the United States. In a July 2011 Japanese survey, 62% correctly answered a multiple choice question about the use of control groups in research experiments, whereas 57% answered correctly in a follow-up December 2011 survey (NISTEP 2012). A Korean survey used self-report measures of knowledge. Koreans were most likely to say they knew “well” or “very well” about diseases (54%) and least likely to say they knew about nanotechnology (14%). Koreans were also unlikely to say they knew about stem cell research (15%) and genetic modification (20%) (KOFAC 2011).
The 2008 GSS included several additional questions on the scientific process that also indicated that many Americans lack an understanding of experimental design. Between 29% and 57% of Americans responded correctly to various questions measuring the concepts of scientific experiment and controlling variables. Only 12% of Americans responded correctly to all the questions on this topic, and nearly 20% did not respond correctly to any of them (NSB 2010). These data raise further questions about how well Americans can reliably apply a generalized understanding of experimental design across different situations. Responses to these questions also allowed a comparison between adults’ understanding of experimentation and that of middle school students tested on the same questions. On the three experimental knowledge questions in which direct comparison is possible, adults’ scores were similar to a national sample of middle school students on one question but were lower on two others (NSB 2010).
Another indicator of public understanding about S&T comes from a measure focused on the public’s capacity to distinguish science from pseudoscience. Since 1979, surveys have asked Americans whether they view astrology as being scientific. In 2012, just over half of Americans (55%) said astrology is “not at all scientific.” One-third (32%) said they thought astrology was “sort of scientific,” and 10% said it was “very scientific.” About 4% said they did not know. In comparison, in 2010, 62% of Americans said that astrology was not scientific, and this percentage has hovered between 55% (2012) and 66% (2004) since 1985. The only years when a smaller percentage of respondents said that astrology was not at all scientific were 1979, when 50% gave this response, and 1983, when 51% did.
Respondents with more years of formal education and higher income were less likely to see astrology as scientific. For example, in 2012, 72% of those with graduate degrees indicated that astrology is “not at all scientific,” compared with 34% of those who did not graduate from high school. Between 2010 and 2012, responses to the astrology question changed more among Americans with less education and less factual knowledge than among other Americans. For example, among those high in factual knowledge, 79% said astrology was “not at all scientific” in 2010, only 5 percentage points more than the 74% who gave this response in 2012. In contrast, among those with the lowest factual knowledge, 52% said astrology was unscientific in 2010, compared with 35% in 2012, a 17 percentage point change.
Age was also related to perceptions of astrology. Younger respondents, in particular, were the least likely to regard astrology as unscientific, with 42% of the youngest age group (18–24) saying that astrology is “not at all scientific.” The largest change, however, occurred in the 35–44 age group. In 2010, 64% of respondents in this group said that astrology was not scientific, whereas 51% gave this response in 2012, a 13 percentage point change (appendix table
A 2010 Chinese survey had multiple questions about superstition. It found that 80% of respondents did not believe in “fortune telling sticks,” 82% did not believe in face reading, 87% did not believe in dream interpretation, 92% did not believe in horoscopes, and 95% did not believe in “computer fortune telling” (CRISP 2010).
Along with actual knowledge, perceived knowledge may also affect individuals’ attitudes and behaviors (Ladwig et al. 2012; Griffin, Dunwoody, and Yang 2013). The 2010 GSS included two questions about how much Americans believed they personally knew about the causes of and solutions to environmental problems. These questions used a 5-point scale that went from “1” for “know nothing at all” to “5” for “know a great deal.” About 27% of Americans chose a “4” or “5” when asked to assess their knowledge of the causes of environmental problems, and 14% chose “4” or “5” to describe their knowledge of environmental solutions (figure
The 2010 International Social Survey Programme (ISSP) allows for international comparisons of perceived science knowledge. The 2010 ISSP asked questions in 31 countries, including the United States, about perceived knowledge bearing on environmental issues. The results show that residents of most other countries surveyed expressed more confidence than Americans about their knowledge of the causes of and solutions to environmental problems. The country with the highest percentage of survey takers choosing “4” or “5” on the 5-point scale for perceived knowledge of the causes of environmental problems was Norway (50%). The United States (27%) had a much lower percentage, although its percentage was similar to that of many other countries. Only Slovak Republic respondents reported less knowledge, on average, than U.S. respondents about causes of environmental problems. Residents of more than half of the countries surveyed gave responses that suggested they knew more. On the subject of environmental solutions, the top countries saw about one-third of residents saying they understood the solutions to environmental problems. The United States (14%) was among the countries with the lowest percentages of residents who said they understood the solutions to environmental problems. Only the Russians (13%) reported less knowledge, on average, than the Americans about environmental solutions. It is also noteworthy that no country’s citizens thought they knew more about solutions than causes but that the difference in mean scores for the two questions was almost always less than half a point on the 5-point scale used by the ISSP (figure