Public Knowledge about S&T

Science and Engineering Indicators has been reporting results of assessments of Americans’ knowledge about S&T since 1979. Initial indicators focused on the proper design of a scientific study and on whether respondents viewed pseudoscientific belief systems, such as astrology, as scientific. The questions also examined understanding of probability; questions meant to assess understanding of basic scientific facts were added in the late 1980s and early 1990s (Miller 2004). These later factual questions—called here the trend factual knowledge questions—remain the core of one of the few available data sets on trends in adult Americans’ knowledge of science (NASEM 2016c).

Although tracking indicators on science knowledge is an important part of this chapter, it is also important to recognize that research has shown that science literacy has only a small—though meaningful—impact on how people make decisions in their public and private lives (see, e.g., Allum et al. 2008; Bauer, Allum, and Miller 2007; NASEM 2016c; NSB 2012:7–27). It is also clear, however, that such knowledge need not result in accepting the existence of a scientific consensus or a policy position that such a consensus might suggest (Kahan et al. 2012). One challenge in measuring the effect of science literacy is that the processes—such as formal and informal education—through which knowledge is gained also contribute to interest in S&T and confidence in the S&T community. These same processes might also affect general and specific attitudes about science. The National Academies of Sciences, Engineering, and Medicine also recently highlighted that science literacy is largely a function of general (or “foundational”) literacy and that more focus should be put on the ability of groups to use science to make high-quality decisions (NASEM 2016c). In this regard, it should be recognized that science literacy is unequally distributed across societies: some groups or communities are able to make use of science when needed, while others are not because they lack access to resources such as local expertise (e.g., community members who are also scientists, engineers, or doctors).

It is also noteworthy that the current survey uses a relatively small number of questions compared to all the scientific subjects about which someone could be asked and thus cannot be said to represent a deep measurement of scientific knowledge. Given such concerns, the 2010 version of Indicators included responses to an expanded list of knowledge questions and found that people who “answered the additional factual questions accurately also tended to provide correct answers to the trend factual knowledge questions included in the GSS” (NSB 2010:7-20). The trend questions used in this report thus likely represent a reasonable indicator of basic science knowledge. The goal when designing these questions was to assess whether an individual likely possessed the knowledge that might be needed to understand a quality newspaper’s science section (Miller 2004).

There is, however, evidence that the current trend measures may be better at differentiating low and medium levels of knowledge than they are at differentiating those with higher levels of knowledge (Kahan 2016). More generally, considering the limitations of using a small number of questions largely keyed to knowledge taught in school, generalizations about Americans’ knowledge of science should be made cautiously.

Another issue is that, although the focus in Indicators is on assessing knowledge about scientific facts and processes, it could also be important to assess knowledge about the institutions of science and how they work—such as peer review and the role of science in policy discussions (Toumey et al. 2010). Others have similarly argued that the knowledge needed for citizenship might be different from what might be needed to be an informed consumer or to understand the role of science in our culture (Shen 1975). Science literacy can also be understood as the capacity to use scientific knowledge, to identify questions, and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity (OECD 2003:132–33).

The degree to which respondents demonstrate an understanding of basic scientific terms, concepts, and facts; an ability to comprehend how S&T generates and assesses evidence; and a capacity to distinguish science from pseudoscience have become widely used indicators of basic science literacy. The 2016 GSS continues to show that many Americans provide multiple incorrect answers to basic questions about scientific facts and do not apply appropriate reasoning strategies to questions about selected scientific issues. Residents of other countries, including highly developed ones, rarely appear to perform better when asked similar questions.

Understanding Scientific Terms and Concepts

Evolution and the Big Bang

The GSS includes two additional true-or-false science questions that are not included in the index calculation because Americans’ responses appear to reflect factors beyond familiarity with scientific facts. One of these questions is about evolution, and the other is about the origins of the universe. In 2016, 52% of Americans correctly indicated that “human beings, as we know them today, developed from earlier species of animals,” and 39% correctly indicated that “the universe began with a big explosion” (Appendix Table 7-10). Both scores are relatively low compared with scores on the other knowledge questions in the survey. The percentage of Americans answering the evolution question correctly has risen from a low of 42% in 2004, while the percentage answering the origins of the universe question correctly is similar to where it has been since 2010 (38%) but higher than during much of the last two decades—it reached lows of 32% in 1990 and 1997 (Appendix Table 7-9).

Those with more education and more factual knowledge typically do well on the two questions. Younger respondents are also more likely to answer both questions correctly. For example, 70% of those ages 18–24 years answered the evolution question correctly, compared with 45% of those ages 65 or older. This pattern is not as pronounced for the other knowledge questions described above (Appendix Table 7-10).

An additional question-wording experiment was included in the 2016 GSS to expand on similar experiments conducted in 2004 (NSB 2006) and 2012 (NSB 2014, 2016). These experiments involve randomly giving each survey respondent one of two or three different versions of a survey question and then comparing the results. The earlier experiments showed that changing the wording of the evolution and origin of the universe questions substantially increased the percentage of respondents answering them correctly. For example, in 2012, 48% of those asked whether it was true or false that “human beings, as we know them today, developed from earlier species of animals” gave the correct answer of true, but 72% answered the question correctly when presented with the same statement with the addition of the preface “According to the theory of evolution.” Similarly, 39% of respondents correctly stated it was true that “the universe began with a big explosion,” but 60% gave the correct answer when presented with the same statement prefaced by “According to astronomers” (Appendix Table 7-9).

Similar patterns were evident in the 2016 version of the experiments. For evolution, 74% of respondents gave the correct response when asked whether it was true or false that “elephants, as we know them today, descended from earlier species of animals” (for a discussion of this question, see Maitland, Tourangeau, and Yan 2014 and Maitland, Tourangeau, Yan, Bell, et al. 2014). This is 22 percentage points higher than the 52% who gave the correct answer when asked the similar question about humans. For the Big Bang question, 69% gave the correct response when the preface “According to astronomers” was added to the original question, and 64% gave the correct response when asked whether it was true or false that “the universe has been expanding ever since it began.” These represent differences of 30 percentage points and 25 percentage points, respectively, from the 39% of respondents who gave the correct response to the original question of whether the universe began with a big explosion. As before, the results suggest that the evolution and origin of the universe items, as originally worded, may lead some people to provide incorrect responses based on factors other than their knowledge of what most scientists believe. While issues of personal identity are not the focus of Indicators, research has pointed to the important role that religious beliefs play in shaping views about evolution and the origins of the universe (e.g., Roos 2014). For additional findings related to these questions, see sidebar Testing Alternative Wording of the Big Bang and Evolution Questions.
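The wording-experiment gaps reported above reduce to simple percentage-point arithmetic. The short sketch below recomputes them from the figures given in the text; the labels and variable names are ours for illustration and do not come from the survey instrument.

```python
# Percentages of correct responses from the 2016 GSS wording experiments,
# as reported in the text (labels are our own shorthand, not GSS variables).
baseline = {"evolution (humans)": 52, "big bang (original)": 39}
variants = {
    "evolution (elephants)": (74, "evolution (humans)"),
    "big bang ('According to astronomers')": (69, "big bang (original)"),
    "big bang ('expanding ever since it began')": (64, "big bang (original)"),
}

# A percentage-point gap is a simple difference, not a ratio.
for name, (pct_correct, base_key) in variants.items():
    gap = pct_correct - baseline[base_key]
    print(f"{name}: {pct_correct}% correct, +{gap} points vs. original wording")
# First line printed: evolution (elephants): 74% correct, +22 points vs. original wording
```

The gaps of 22, 30, and 25 percentage points match the differences cited in the paragraph above.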

Testing Alternative Wording of the Big Bang and Evolution Questions

Race, Ethnicity, and Factual Science Knowledge

International Comparisons

There are few current international efforts to measure science knowledge in the way it is done in the United States, in part because scholarly attention has shifted to understanding attitudes about science and scientists. This shift has likely occurred because of the aforementioned evidence that science knowledge is only weakly related to science attitudes (Bauer, Allum, and Miller 2007), including support for science (NASEM 2016c). Most data now available are thus somewhat dated.

Knowledge scores for individual items vary from country to country, and it is rare for one country to consistently outperform others across all items in a given year (Table 7-1). One exception is a 2013 Canadian survey that showed Canadians scoring as well as or better than Americans and residents of most other countries on the core science questions (CCA 2014). For the physical and biological science questions, knowledge scores are relatively low in China, Russia, and Malaysia (CRISP 2016; Gokhberg and Shuvalova 2004; MASTIC 2010). Compared with overall scores in the United States and the European Union (EU) (European Commission 2005), scores in Japan (NISTEP 2012) are also relatively low for several questions.

Scores on a smaller set of four questions administered in 12 European countries in 1992 and 2005 show each country performing better in 2005 (European Commission 2005), in contrast to a flat trend in corresponding U.S. data. In Europe, as in the United States, men, younger adults, and more educated people tended to score higher on these questions.

Percentage of correct answers to factual knowledge questions in physical and biological sciences, by region or country: Most recent year

Reasoning and Understanding the Scientific Process

International Comparisons

Reasoning and understanding have not been the focus of surveys in most other countries in recent years. A 2010 Chinese survey reported that 49% understood the idea of probability, 20% understood the need for comparisons in research, and 31% understood the idea of scientific research (CRISP 2010). In a July 2011 Japanese survey, 62% correctly answered a multiple-choice question on experiments related to the use of a control group, whereas 57% answered correctly in a follow-up December 2011 survey (NISTEP 2012). As noted previously, 66% of Americans provided a correct response to a similar question in 2014.


Another indicator of public understanding of S&T is the public’s capacity to distinguish science from pseudoscience. One such measure, Americans’ views on whether astrology is scientific, has been included in Indicators because data on it are available going back to the late 1970s. Other examples of pseudoscience include belief in lucky numbers, extrasensory perception, and magnetic therapy.

More Americans see astrology as unscientific today than in the late 1970s, although there has been some variation in recent years. In 2016, about 60% of Americans said astrology is “not at all scientific,” a value near the middle of the historical range and down somewhat from 65% in 2014. Twenty-nine percent said they thought astrology was “sort of scientific,” and the remainder said they thought astrology was “very scientific” (8%) or that they “didn’t know how scientific” astrology is (3%). The percentage of Americans who report seeing astrology as unscientific has ranged between 50% (1979) and 66% (2004).

Respondents with more years of formal education and higher income were less likely to see astrology as scientific. For example, in 2016, 76% of those with bachelor’s degrees indicated that astrology is “not at all scientific,” compared with 57% of those whose highest level of education was high school. Age was also related to perceptions of astrology. Younger respondents were the least likely to reject astrology, with only 54% of the youngest age group (18–24 years old) and 53% of the next group (25–34 years old) saying that astrology is “not at all scientific.” At least 60% of all other groups rejected astrology (Appendix Table 7-12).

Perceived Understanding of Scientific Research

International Comparisons

Only a small number of countries ask about their residents’ perceived understanding of science. In Switzerland, about 28% agreed with a statement about being well-informed about science and research by choosing 4 or 5 on a 5-point scale, where 1 indicated complete disapproval of the statement and 5 indicated complete approval (Schafer and Metag 2016). This is similar to the 31% who expressed “clear understanding” in the United States.