
Chapter 7. Science and Technology: Public Attitudes and Understanding

Public Knowledge About S&T

Knowledge and understanding of S&T can be relevant to public policy and the personal choices that people make. In developing measures for what is often termed scientific literacy across nations, the Organisation for Economic Co-operation and Development (OECD 2003) emphasizes that scientific literacy is a matter of degree and that people cannot be classified as either literate or not literate. The OECD noted that literacy had several components:

Current thinking about the desired outcomes of science education for all citizens emphasizes the development of a general understanding of important concepts and explanatory frameworks of science, of the methods by which science derives evidence to support claims for its knowledge, and of the strengths and limitations of science in the real world. It values the ability to apply this understanding to real situations involving science in which claims need to be assessed and decisions made…

Scientific literacy is the capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity. (pp. 132–33)

A good understanding of basic scientific terms, concepts, and facts; an ability to comprehend how S&T generates and assesses evidence; and a capacity to distinguish science from pseudoscience are widely used indicators of scientific literacy. U.S. survey data indicate that many Americans provide multiple incorrect answers to basic questions about scientific facts and do not apply appropriate reasoning strategies to questions about selected scientific issues. Residents of other countries, including highly developed ones, appear to perform no better, on balance, when asked similar questions. However, in light of the limitations of using a small number of questions largely keyed to knowledge taught in school, generalizations about Americans' knowledge of science should be made cautiously.

Understanding Scientific Terms and Concepts

U.S. Patterns and Trends

One common indicator of public understanding about science comes from an index of factual science knowledge questions covering a range of science disciplines. Responses to nine questions are used in a combined scale as an indicator of general knowledge about S&T. In 2010, Americans, on average, were able to correctly answer 5.6 out of the 9 items, for an average percent correct of 63%.
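As a minimal sketch of how such a combined scale works (using an invented answer key and a hypothetical respondent, not actual GSS items or data), each of the nine items is scored correct or incorrect and the results are summed and expressed as a percent:

```python
# Sketch of a nine-item factual knowledge scale (hypothetical data,
# not the actual GSS items or responses).

def score_knowledge_scale(responses, answer_key):
    """Count correct answers and return (number correct, percent correct)."""
    correct = sum(1 for item, answer in responses.items()
                  if answer_key.get(item) == answer)
    return correct, 100.0 * correct / len(answer_key)

# Hypothetical answer key for nine true-false items (item IDs invented).
answer_key = {f"q{i}": key for i, key in enumerate(
    ["T", "F", "T", "T", "F", "T", "F", "T", "T"], start=1)}

# One hypothetical respondent's answers.
responses = {f"q{i}": a for i, a in enumerate(
    ["T", "F", "T", "F", "F", "T", "T", "T", "F"], start=1)}

n_correct, pct = score_knowledge_scale(responses, answer_key)
print(n_correct, round(pct, 1))  # 6 of 9 correct -> 66.7
```

Averaging such per-respondent counts across the sample yields summary figures like the 5.6-of-9 mean reported above.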

The public's level of factual knowledge about science has not changed much over the past two decades (figure 7-7). Since 2001, the average number of correct answers to a series of mostly true-false science questions in years for which fully comparable data were collected has ranged from 5.6 correct responses to 5.8 correct responses, although knowledge on individual questions has varied somewhat over time (appendix tables 7-8 and 7-9).[20] (Also see sidebar, "Measuring Factual Science Knowledge Over Time.")

Some individuals know more about science than others, of course. Factual knowledge of science is strongly related to people's level of formal schooling and the number of science and mathematics courses completed. Among those who have no more than a high school education, 49% of the questions were answered correctly, on average. Individuals who had attended college answered more items correctly; the average percent correct rose to 81% among those who had taken three or more science and mathematics courses in college (figure 7-8; appendix table 7-8).

Respondents age 65 and older are less likely than younger Americans to answer the factual science questions correctly (appendix table 7-8). Younger generations have had more formal education, on average, than Americans coming into adulthood some 50 years ago; these long-term societal changes make it difficult to know whether the association between age and factual knowledge is due primarily to aging processes, cohort differences in education, or other factors. An analysis of surveys conducted between 1979 and 2006 concluded that public understanding of science has increased over time and by generation, even after controlling for formal education levels (Losh 2009, 2011). (Also see Bauer 2009.)

Factual knowledge about science is also associated with sex. Men tend to answer more factual science knowledge questions correctly than do women. However, this pattern depends on the science domain referenced in the question. On the factual questions included in NSF surveys since 1979, men score higher than women on questions in the physical sciences, but not on questions in the biological sciences. Women tend to score at least as high as men on the biological science questions, and often a bit higher (table 7-7).

Comparisons of Adult and K–12 Student Knowledge

The factual knowledge questions that have been repeatedly asked in U.S. surveys involve information that was being taught in grades K–12 when most respondents were young. Because science continually generates new knowledge that reshapes how people understand the world, scientific literacy requires lifelong learning so that citizens become familiar with terms, concepts, and facts that emerge after they complete their schooling.

The 2008 GSS included several different kinds of factual science knowledge questions; seven of those questions can be directly compared with national student assessments of science knowledge. Adult Americans scored higher than or similar to fourth and eighth grade students on five of the seven factual science knowledge questions for which comparisons were possible (table 7-8).

Comparisons should be made cautiously because of differences in the circumstances in which students and adults responded to these science knowledge questions. Students' tests were self-administered on paper, whereas most GSS respondents answered orally in response to questions posed by an interviewer. Also, elementary and middle school students had an advantage over adults in that classroom preparation preceded their tests. (For more details, see NSB 2010.)

Knowledge About Nanotechnology and the Polar Regions

New developments in S&T are always on the horizon. Indicators of factual science knowledge need to probe knowledge and understanding about newly emerging science topics, as well as more established topics. Recent GSS surveys included indicators of public understanding for one such emerging area: nanotechnology.

A small minority report having heard "a lot" about nanotechnology; 31% of Americans correctly indicate that "nanotechnology involves manipulating extremely small units of matter, such as individual atoms, in order to produce better materials" is true.[21] About two in ten (18%) Americans correctly indicate that "the properties of nanoscale materials often differ fundamentally and unexpectedly from the properties of the same materials at larger scales." (Also see "Public Attitudes About Specific S&T-Related Issues.")

Those who scored higher on the general factual knowledge scale were also more likely to answer the two questions about nanotechnology correctly (figure 7-9).[22] Likewise, the educational and demographic characteristics associated with higher scores on the trend factual knowledge questions are also associated with higher knowledge of nanotechnology (appendix table 7-11). These data suggest that the trend factual knowledge scale, although focused on the kind of scientific facts and principles learned in school, is a reasonable indicator of factual science knowledge in general, including knowledge on newly emerging topics acquired later in life.

The 2006 and 2010 GSSs included a series of knowledge questions about the polar regions. Knowledge about the polar regions was measured using a 4-item scale of true-false questions. In 2010, Americans answered 60% of the four items correctly, on average, up from 55% in 2006. The increase was concentrated in two of the four questions: "The North Pole is on a sheet of ice that floats on the Arctic Ocean" (from 41% in 2006 to 48% in 2010) and "Hunting is more likely than climate change to make polar bears become extinct" (from 36% in 2006 to 44% in 2010) (appendix table 7-12). This increase in knowledge may stem, in part, from increased attention to the polar regions during the 2007–2008 International Polar Year. However, there may be other reasons for the change, including increased public attention to global climate change and its implications for the polar regions.

International Comparisons on Factual Knowledge Questions

Adults in different countries and regions have been asked identical or substantially similar questions to test their factual knowledge of science. Knowledge scores for individual items vary from country to country, and no country consistently outperforms the others. For the physical science and biological science questions reported in table 7-9, knowledge scores are relatively low in China, Russia, and Malaysia. Compared to the United States and the EU, scores in Japan are also relatively low.[23]

Science knowledge scores vary considerably across Europe, with northern European countries, led by Sweden, scoring the highest on a set of 13 questions. For a smaller set of 4 questions that were administered in 12 European countries in 1992 and 2005, each country performed better in 2005. In contrast, U.S. data on science knowledge do not show upward trends over the same period. In Europe, as in the United States, men, younger adults, and more highly educated people tend to score higher on these questions.

Reasoning and Understanding the Scientific Process

Another indicator of public understanding of science focuses on understanding of how S&T generates and assesses scientific evidence, rather than knowledge of particular facts. Past NSF surveys have used questions on three general topics—probability, experimental design, and the scientific method—to assess trends in Americans' understanding of the process of scientific inquiry. One set of questions tests how well respondents apply the principles of probabilistic reasoning to a series of questions about a couple whose children have a one in four chance of suffering from an inherited disease.[24] A second set of questions deals with the logic of experimental design, asking respondents about the best way to design a test of a new drug for high blood pressure. A third, open-ended question probes what respondents think it means to "study something scientifically." Because probability, experimental design, and the scientific method are all central to scientific research, these questions are relevant to how respondents evaluate scientific evidence. These measures are reviewed separately and then as a combined indicator of public understanding about scientific inquiry (table 7-10; appendix table 7-13).
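The core of the probabilistic reasoning the first set of questions tests is independence across births: a one-in-four risk applies anew to each child, regardless of earlier outcomes. The sketch below illustrates this logic with exact fractions (the numbers follow the one-in-four setup described above; the specific calculations shown are illustrative, not the survey wording):

```python
from fractions import Fraction

# Each child independently has a 1-in-4 chance of inheriting the disease.
p = Fraction(1, 4)

# Independence: the risk for the fourth child is still 1/4, no matter
# what happened with the first three children.
risk_fourth_child = p
print(risk_fourth_child)   # 1/4

# Probability that none of four children is affected: (3/4)^4.
none_affected = (1 - p) ** 4
print(none_affected)       # 81/256

# Expected number of affected children among four: 4 * (1/4) = 1.
expected_affected = 4 * p
print(expected_affected)   # 1
```

A common incorrect response pattern treats the one-in-four figure as a quota (e.g., exactly one of four children must be affected) rather than as an independent per-birth probability.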

In 2010, two-thirds of Americans correctly responded to two questions about the probability that a child would inherit an illness. Understanding of probability has been fairly stable over time, with the percentage giving a correct response ranging from 64% to 69% since 1999. About half (51%) of Americans correctly identified the concept of using an experimental design or control group in the context of a medical study in 2010, a marked increase from 38% in 2008 (table 7-10; appendix table 7-13).[25] Understanding of what it means to study something scientifically is considerably lower, at 18% in 2010. Correct responses on this question are lower, in part, because expressing a concept in one's own words is more difficult than recognizing a correct response to a closed-ended, multiple-choice survey question. Over time, correct responses on this question have ranged from a low of 18% in 2010 to a high of 26% in 2001.

Taken together, 42% of Americans exhibited an understanding of scientific inquiry in 2010, up from 36% in 2008.[26] As was found for factual science knowledge, public understanding of scientific inquiry is strongly associated with people's level of formal schooling and the number of science and mathematics courses completed. Among those who have no more than a high school education, 23% provided a correct response on the measure of understanding scientific inquiry. Understanding of scientific inquiry is somewhat higher among college attendees who did not take college-level science or mathematics courses. However, it is notably higher (71% correct) among individuals who completed at least three science and mathematics courses in college (figure 7-10; appendix table 7-14).
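The composite classification behind this combined figure (described in note 26) is a simple rule: a respondent counts as understanding scientific inquiry when they answer the probability items correctly and also answer correctly on either the experimental-design measure or the open-ended scientific-study measure. The function below is an illustrative sketch of that rule, not the actual GSS coding program:

```python
def understands_scientific_inquiry(probability_ok, experiment_ok, open_ended_ok):
    """Composite rule from note 26: correct on probability AND correct on
    either the experimental-design measure or the open-ended measure."""
    return probability_ok and (experiment_ok or open_ended_ok)

# Hypothetical respondents:
print(understands_scientific_inquiry(True, False, True))   # True
print(understands_scientific_inquiry(True, False, False))  # False
print(understands_scientific_inquiry(False, True, True))   # False
```

Note that a correct answer on probability is necessary but not sufficient; either of the other two measures can supply the second required component.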

Americans age 65 and older score lower than younger adults on the scientific process measures. The differences are greatest on understanding of an experimental or control group design and on the open-ended questions about the meaning of scientific study. These differences may be related to the lower levels of formal education among older generations in the United States. The same pattern was found for factual science knowledge.

Unlike the patterns found on factual knowledge, particularly on facts related to the physical sciences, men and women obtain similar scores on understanding of scientific inquiry (figure 7-10; appendix table 7-14).

Comparisons of Adult and K–12 Student Understanding

The 2008 GSS included several additional questions on the scientific process that provide an opportunity to examine Americans' understanding of experimental design in more detail. From 29% to 57% of Americans responded correctly to questions measuring the concepts of scientific experiment and controlling variables, only 12% responded correctly to all the questions on this topic, and nearly 20% of Americans did not respond correctly to any of them (appendix table 7-15). These data raise questions about how well Americans can reliably apply a generalized understanding of experimental design across different situations.

These questions allow a comparison between adults' understanding of experimentation and that of middle school students tested on the exact same questions. Out of the three experimental knowledge questions where direct comparison is possible, adults' scores are similar to a national sample of middle school students on one question, but lower on two others (appendix table 7-16).

Other Indicators of Public Knowledge and Understanding About S&T

The trend factual knowledge and process understanding questions are both indicators used to gauge public knowledge and understanding about S&T over time. These are but two of the potential indicators that might be useful, however (Miller 1998). A handful of other approaches have been used in recent years. These are reviewed briefly below. One provides an alternative measure of factual public knowledge about science that is rooted in national standards for what students are expected to know about science. Other approaches include indicators of understanding about statistics and the interpretation of charts, as well as indicators of the ability to distinguish between science and pseudoscience. Taken together, these approaches provide a more complete portrait of public understanding about S&T. Other approaches are currently being developed that seek to add indicators of the understanding of science as it applies to everyday life and measure public understanding of institutions and how they influence the development of S&T. (See sidebar, "Public Understanding of Science and Its Role in Everyday Life.")

National Standards and Applying Science Knowledge to Specific Problems

Measures recently developed in light of national standards for what students should know about scientific topics provide additional information about public knowledge and understanding. These standards go beyond the factual knowledge questions that have been used to measure trends in public knowledge of science on NSF surveys since 1979 and often include the ability to apply science knowledge to specific problems. Questions of this kind were administered as part of the 2008 GSS and were reported in NSB 2010. The 2008 GSS questions were selected from Project 2061, an initiative by the American Association for the Advancement of Science (AAAS) that develops assessment materials aligned with current curricular standards, and from three national exams administered to students.[27] The series of questions included nine factual questions, two questions that measured chart reading and the statistical concept of a "mean," and five questions that tested reasoning and understanding of the scientific process. Two of the 16 questions were open-ended and the rest were multiple-choice. (For details on the measures, see appendix table 7-17.[28])

Respondents who answered these additional factual knowledge questions correctly (on the "scale 2" index reflecting national standards) also tended to answer the trend factual knowledge questions correctly. This suggests that the trend factual knowledge questions are a reasonable indicator of the type of knowledge students are tested on in national assessments (appendix table 7-18).

Understanding of Statistics and Charts

Americans encounter basic statistics and charts in everyday life. Many media reports cite studies in health, social, economic, and political trends. Understanding statistical concepts is important to understanding the meaning of these studies and, consequently, to scientific literacy (Crettaz von Roten 2006). One test of these concepts included on the 2008 GSS found that 74% of Americans could read a simple chart correctly and 66% understood the concept of "mean" in statistics. Understanding these two concepts was associated with both formal education and the number of math and science courses taken. Older respondents were less likely than younger adults to respond correctly to these two questions. Men and women were about equally likely to answer these questions correctly (appendix table 7-15).

Pseudoscience

Another indicator of public understanding about S&T comes from measuring the public's capacity to distinguish science from pseudoscience. One such indicator, on astrology, is available over time from the NSF surveys conducted since 1979. Recent surveys show a downward trend in the share of Americans who consider astrology scientific. In the 2010 GSS, 62% of Americans indicated that they believe astrology is "not at all scientific," 28% said that it is "sort of scientific," and just 6% considered it "very scientific." Respondents with more years of formal education were less likely to perceive astrology to be at all scientific. In 2010, 78% of college graduates indicated that astrology is "not at all scientific," compared with 58% of high school graduates. Those who scored highest on the factual knowledge measures were more likely to say that astrology is "not at all scientific" (79%) than those who scored lowest (52%). Respondents who correctly understood the concept of scientific inquiry were more likely to say that astrology is "not at all scientific" (73%) than those who did not understand the concept (54%). However, the youngest age group (18–24) was less likely to say astrology is "not at all scientific" (46%) and more likely to say it is "very" or "sort of" scientific (54%) (appendix table 7-19).[29]


[20] Survey items that test factual knowledge sometimes use easily comprehensible language at the cost of scientific precision. This may prompt some highly knowledgeable respondents to feel that the items blur or neglect important distinctions, and in a few cases may lead respondents to answer questions incorrectly. In addition, the items do not reflect the ways that established scientific knowledge evolves as scientists accumulate new evidence. Although the text of the factual knowledge questions may suggest a fixed body of knowledge, it is more accurate to see scientists as making continual, often subtle modifications in how they understand existing data in light of new evidence.
[21] Respondents who say they know "nothing at all" about nanotechnology were not asked the two knowledge questions about this topic; they are classified as holding incorrect responses to both questions.
[22] The two nanotechnology questions were asked only of respondents who said they had some familiarity with nanotechnology, and a sizable majority of the respondents who ventured a response different from "don't know" answered the questions correctly. To measure nanotechnology knowledge more reliably, researchers would prefer a scale with more than two questions.
[23] In its own international comparison of scientific literacy, Japan ranked 10th among the 14 countries evaluated (National Institute of Science and Technology Policy 2002).
[24] Early NSF surveys used additional questions to measure understanding of probability. Bann and Schwerin (2004) identified a smaller number of questions that could be administered to develop a comparable indicator. Starting in 2004, the NSF surveys used this smaller set of questions for the trend indicator of understanding probability.
[25] A change of this magnitude in a 2-year period is unusual. Because classification of knowledge on these items includes open-ended questions, it is possible that some of the change could stem from unknown differences in coding practices by the GSS staff over time.
[26] Classification as understanding scientific inquiry is based on providing a correct response to the measure of understanding probability and providing a correct response to either the measure of understanding an experiment or the open-ended measure of understanding a scientific study.
[27] The questions were selected from the Trends in International Mathematics and Science Study (TIMSS), the National Assessment of Educational Progress (NAEP), practice General Educational Development (GED) exams, and AAAS Project 2061.
[28] The scoring of the open-ended questions closely followed the scoring of the corresponding test administered to middle-school students.
For the NAEP question, "Lightning and thunder happen at the same time, but you see the lightning before you hear the thunder. Explain why this is so," the question was scored as follows:
  1. Complete: The response provided a correct explanation including the relative speeds at which light and sound travel. For example, "Sound travels much slower than light so you see the light sooner at a distance."
  2. Partial: The response addressed speed and used terminology such as thunder for sound and lightning for light, or made a general statement about speed but did not indicate which is faster. For example, "One goes at the speed of light and the other at the speed of sound."
  3. Unsatisfactory/Incorrect: Any response that did not relate or mention the faster speed of light or its equivalent, the slower speed of sound. For example, "Because the storm was further out," or "Because of static electricity."
For the TIMSS question, "A solution of hydrochloric acid (HCl) in water will turn blue litmus paper red. A solution of the base sodium hydroxide (NaOH) in water will turn red litmus paper blue. If the acid and base solutions are mixed in the right proportion, the resulting solution will cause neither red nor blue litmus paper to change color. Explain why the litmus paper does not change color in the mixed solution," the question was scored as follows:
  1. Correct: The response had to refer to a neutralization or a chemical reaction that results in products that do not react with litmus paper. Three kinds of answers were classified as correct:
    a. The response referred explicitly to the formation of water (and salt) from the neutralization reaction. For example, "Hydrochloric acid and sodium hydroxide will mix together to form water and salt, which is neutral."
    b. The response referred to neutralization (or the equivalent) even if the specific reaction is not mentioned. For example, "The mixed solution is neutral, so litmus paper does not react."
    c. The response referred to a chemical reaction taking place (implicitly or explicitly) to form products that do not react with litmus paper (or a similar substance), even if neutralization was not explicitly mentioned. For example, "The acid and base react, and the new chemicals do not react with litmus paper."
  2. Partially correct: The response mentioned only that acids and bases are "balanced," "opposites," "cancel each other out," or that it changes to a salt without mentioning the neutralization reaction. These answers suggest that the respondent remembered the concept but the terminology they used was less precise, or that the answer was partial. For example, "They balance each other out."
  3. Incorrect: The response did not mention any of the above in a–c or is too partial or incomplete, and/or uses terminology that is too imprecise. For example, "Because they are base solutions—the two bases mixed together there is no reaction," or "There is no change. Both colors change to the other."
[29] The pseudoscience section focuses on astrology because of the availability of long-term national trend indicators on this subject. Other examples of pseudoscience include belief in lucky numbers, unidentified flying objects (UFOs), extrasensory perception (ESP), and magnetic therapy.