
Chapter 7. Science and Technology: Public Attitudes and Understanding

Public Knowledge About S&T

Science and Engineering Indicators has been assessing Americans’ knowledge about science and technology since 1979. Initial questions focused on the proper design of a scientific study and views about whether pseudoscientific belief systems, such as astrology, could be considered scientific. Questions focused on an understanding of probability and an understanding of basic constructs were added in the late 1980s and early 1990s (Miller 2004). These later questions remain the core of the available data on trends in adult Americans’ knowledge of science.

Researchers have questioned both the degree to which scientific literacy has a substantial impact on how people make decisions in their public and private lives (see, for example, NSB 2012:7-27; Bauer, Allum, and Miller 2007) and whether a short battery of questions can assess scientific literacy. Despite the limitations of these indicators, evidence suggests that knowledge about science, as measured by the GSS, has a small but meaningful impact on attitudes and behaviors (Allum et al. 2008). In addition, adult responses to an expanded list of knowledge questions drawn from tests given to students nationwide indicate that people who “answered the additional factual questions accurately also tended to provide correct answers to the trend factual knowledge questions” included in the GSS (NSB 2010:7-20). This finding suggests that the trend questions used in this report represent a reasonable indicator of basic science knowledge. At the same time, in light of the limitations of using a small number of questions largely keyed to knowledge taught in school, generalizations about Americans’ knowledge of science should be made cautiously. Toumey et al. (2010) recommended additional research aimed at developing a measure of S&T literacy focused on how people actually use S&T knowledge. Similar challenges confront attempts to study health literacy (Berkman, Davis, and McCormack 2010) and political literacy (Delli Carpini and Keeter 1996). More generally, in developing measures for what is often termed scientific literacy across nations, the Organisation for Economic Co-operation and Development (OECD 2003) emphasizes that scientific literacy is a matter of degree and that people cannot be classified as either literate or not literate. The OECD noted that literacy had several components:

Current thinking about the desired outcomes of science education for all citizens emphasizes the development of a general understanding of important concepts and explanatory frameworks of science, of the methods by which science derives evidence to support claims for its knowledge, and of the strengths and limitations of science in the real world. It values the ability to apply this understanding to real situations involving science in which claims need to be assessed and decisions made...

Scientific literacy is the capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity. (OECD 2003:132–33)

The degree to which respondents demonstrate an understanding of basic scientific terms, concepts, and facts; an ability to comprehend how S&T generates and assesses evidence; and a capacity to distinguish science from pseudoscience are widely used indicators of basic scientific literacy.

The 2012 GSS continues to show that many Americans provide multiple incorrect answers to basic questions about scientific facts and do not apply appropriate reasoning strategies to questions about selected scientific issues. Residents of other countries, including highly developed ones, appear to perform no better, on balance, when asked similar questions.

Understanding Scientific Terms and Concepts

U.S. Patterns and Trends

A primary indicator of public understanding of science in the United States comes from a nine-question index of factual knowledge questions included in the GSS. In 2012, Americans correctly answered an average of 5.8 of the 9 items (65%), slightly higher than in 2010 (5.6 of 9 items, or 63%) (appendix table 7-8).
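
The arithmetic behind this indicator is straightforward; the sketch below, which uses entirely hypothetical respondent data rather than actual GSS records, shows one way the average number of correct answers and the corresponding percentage could be computed from item-level scores coded 1 (correct) or 0 (otherwise).

```python
# Illustrative sketch only: hypothetical data, not the GSS's actual scoring code.
# Each row holds one respondent's answers to the nine factual knowledge items,
# coded 1 for a correct answer and 0 otherwise (the treatment of "don't know"
# responses here is an assumption).
respondents = [
    [1, 1, 0, 1, 1, 0, 1, 1, 0],  # 6 of 9 correct
    [1, 0, 1, 1, 0, 1, 0, 1, 0],  # 5 of 9 correct
    [1, 1, 1, 0, 1, 1, 1, 1, 0],  # 7 of 9 correct
]

NUM_ITEMS = 9
scores = [sum(answers) for answers in respondents]   # correct answers per person
mean_correct = sum(scores) / len(scores)             # analogous to the 5.8-of-9 figure
mean_percent = 100 * mean_correct / NUM_ITEMS

print(f"Average correct: {mean_correct:.1f} of {NUM_ITEMS} ({mean_percent:.0f}%)")
```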

The public’s level of factual knowledge about science has not changed much over the past two decades (figure 7-6). Since 2001, the average number of correct answers to a series of nine questions for which fully comparable data have been collected has ranged from 5.6 to 5.8 correct responses, although scores for individual questions have varied somewhat over time (appendix tables 7-8 and 7-9). Pew Research used several of the same questions in a 2013 survey and received nearly identical results (Pew Research Center 2013a).

Factual knowledge of science is strongly related to people’s level of formal schooling and the number of science and mathematics courses completed. For example, those who had not completed high school answered 45% of the nine questions correctly, and those who had completed a bachelor’s degree answered 78% of the questions correctly. The average percentage correct rose to 83% among those who had taken three or more science and mathematics courses in college (figure 7-7). Respondents aged 65 or older are less likely than younger Americans to answer the factual science questions correctly (appendix table 7-8). Younger generations have had more formal education, on average, than Americans coming into adulthood some 50 years ago; these long-term societal changes make it difficult to know whether the association between age and factual knowledge is due primarily to aging processes, cohort differences in education, or other factors. Analyses of surveys conducted between 1979 and 2006 concluded that public understanding of science has increased over time and by generation, even after controlling for formal education levels (Losh 2010, 2012).

Factual knowledge about science is also associated with the respondent's sex. On average, men tend to answer more factual science knowledge questions correctly (70% correct) than do women (60% correct) (figure 7-7). However, this pattern depends on the science domain referenced in the question. Men typically score higher than women on questions in the physical sciences but not on questions in the biological sciences. Women tend to score at least as high as men on the biological science questions and often a bit higher (table 7-7; appendix table 7-10).

Evolution and the Big Bang

The GSS survey includes two additional true-or-false science questions that are not included in the index calculation because Americans’ responses appear to reflect factors beyond unfamiliarity with basic elements of science. One of these questions addresses evolution, and the other addresses the origins of the universe. To better understand Americans’ responses, the 2012 GSS replicated an experiment first conducted in 2004 (NSB 2006). Half of the survey respondents were randomly assigned to receive questions focused on information about the natural world (“human beings, as we know them today, developed from earlier species of animals” and “the universe began with a big explosion”). The other half were asked the questions with a preface that focused on conclusions that the scientific community has drawn about the natural world (“according to the theory of evolution, human beings, as we know them today, developed from earlier species of animals” and “according to astronomers, the universe began with a big explosion”).

In 2012, respondents were much more likely to answer both questions correctly if the questions were framed as being about scientific theories or ideas rather than about natural world facts. For evolution, 48% of Americans answered “true” when presented with the statement that human beings evolved from earlier species with no preface, whereas 72% of those who received the preface said “true,” a 24 percentage point difference.[14] These results replicate the pattern from 2004, when the percentage answering “true” went from 42% to 74%, a 32 percentage point difference (NSB 2008). For the big bang question, the pattern was very similar: in 2012, 39% of Americans answered “true” when presented with the statement about the origin of the universe without the preface, whereas 60% of those who heard the statement with the preface answered “true.” This represents a 21 percentage point difference. The 2004 experiment found that including the preface increased the percentage who answered correctly from 33% to 62%, a 29 percentage point difference (NSB 2008). Residents of other countries have been more likely than Americans to answer “true” to the evolution question.[15]
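
As a rough illustration of how such a split-ballot experiment is analyzed, the sketch below randomly assigns hypothetical respondents to one of the two evolution wordings and expresses the framing effect as the percentage point difference in "true" responses between the two halves. The response probabilities simply mirror the 2012 figures reported above; no real survey data are used.

```python
import random

# Illustrative sketch of a split-ballot wording experiment (hypothetical data).
# Each respondent is randomly assigned to one of the two wordings; the assumed
# probabilities of answering "true" mirror the 2012 percentages reported above.
random.seed(0)
P_TRUE = {"no_preface": 0.48, "with_preface": 0.72}

answers = {"no_preface": [], "with_preface": []}
for _ in range(2000):
    wording = random.choice(["no_preface", "with_preface"])  # random assignment
    answers[wording].append(random.random() < P_TRUE[wording])

pct = {w: 100 * sum(a) / len(a) for w, a in answers.items()}
effect_pp = pct["with_preface"] - pct["no_preface"]          # framing effect
print(f"Framing effect: about {effect_pp:.0f} percentage points")
```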

International Comparisons

In past years, researchers in a range of countries have asked adults identical or substantially similar questions to test their factual knowledge of science. Knowledge scores for individual items vary from country to country, and no country consistently outperforms the others. For the physical science and biological science questions, knowledge scores are relatively low in China, Russia, and Malaysia. Compared with scores in the United States and the EU overall, scores in Japan are also relatively low for several questions (table 7-8).[16]

Science knowledge scores have also varied across Europe, with northern European countries, led by Sweden, scoring the highest on a set of 13 questions. For a smaller set of four questions, administered in 12 European countries in 1992 and 2005, each country performed better in 2005. In contrast, U.S. data on science knowledge did not show upward trends over the same period. In Europe, as in the United States, men, younger adults, and more highly educated people tend to score higher on these questions (NSB 2008).

The 2011 BBVA Foundation survey of 10 European countries and the United States included a set of 22 knowledge questions that were mostly different from those that have traditionally been included in Indicators. On average, the United States—with a mean score of 14.3 correct answers—performed similarly to many of the European countries surveyed, with a score close to the European average (13.4). The highest scoring countries were Denmark (15.6) and the Netherlands (15.3). Germany (14.8), the Czech Republic (14.6), Austria (14.2), the United Kingdom (14.1), and France (13.8) all had scores similar to those of the United States.

There were, however, some questions on which Europeans did much better than Americans. For example, for the statement, “the earliest humans lived at the same time as the dinosaurs,” about 43% of Americans correctly answered “false,” whereas 61% of Europeans in the 10 countries surveyed gave the correct response. Another question on which Americans did substantially worse focused on nuclear energy. About 47% of Americans correctly indicated that the “greenhouse effect” is not caused by the use of nuclear energy, compared with 58% of Europeans. Conversely, there were several questions on which Americans did substantially better (BBVA Foundation 2012a).[17]

Little international polling is done on the question of evolution or the big bang. However, residents of other countries have typically been more likely than Americans to say they believe that “human beings, as we know them today, developed from an earlier species of animals.” For example, 70% of European respondents in 2005 (NSB 2006) and 76% of Japanese respondents in 2011 (NISTEP 2012) gave this response.

Reasoning and Understanding the Scientific Process

U.S. Patterns and Trends

Another indicator of public understanding of science focuses on understanding of how science generates and assesses evidence, rather than knowledge of particular facts. Such measures reflect recognition that knowledge of specific S&T facts is conceptually different from knowledge about the overall process of science (Miller 1998).

Data on three general topics—probability, experimental design, and the scientific method—show trends in Americans’ understanding of the process of scientific inquiry. One set of questions tests how well respondents apply the principles of probabilistic reasoning to a series of questions about a couple whose children have a 1 in 4 chance of suffering from an inherited disease. A second set of questions deals with the logic of experimental design, asking respondents about the best way to design a test of a new drug for high blood pressure. A third, open-ended question probes what respondents think it means to “study something scientifically.” Because probability, experimental design, and the scientific method are all central to scientific research, these questions are relevant to how respondents evaluate scientific evidence. These measures are reviewed separately and then as a combined indicator of public understanding about scientific inquiry.

With regard to probability, 82% of Americans in 2012 correctly indicated that the fact that a couple’s first child has the illness has no relationship to whether three future children will have the illness. About 72% of Americans correctly responded that the odds of a genetic illness are equal for all of a couple’s children. Overall, 65% got both probability questions correct. Understanding of probability has been fairly stable over time, with the percentage giving both correct responses ranging from 64% to 69% since 1999 and going no lower than 61% dating back to 1990 (table 7-9; appendix tables 7-11 and 7-12).[18]
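
The reasoning these probability items test can be made concrete with a small simulation. The sketch below is purely illustrative and not part of any survey instrument: it draws hypothetical families in which each child independently has a 1 in 4 chance of inheriting the illness and shows that a later child's risk is the same whether or not the first child is affected.

```python
import random

# Illustrative simulation of the independence principle behind the probability
# questions: each child has an independent 1-in-4 chance of the inherited illness.
random.seed(0)
P_ILLNESS = 0.25

families = [(random.random() < P_ILLNESS, random.random() < P_ILLNESS)
            for _ in range(100_000)]

later_if_first_ill = [second for first, second in families if first]
later_if_first_well = [second for first, second in families if not first]

# Both rates come out close to 0.25: the first child's outcome carries no
# information about the risk faced by later children.
print(sum(later_if_first_ill) / len(later_if_first_ill))
print(sum(later_if_first_well) / len(later_if_first_well))
```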

With regard to understanding experiments, one-third (34%) of Americans were able to answer a question about how to test a drug and then provide a correct response to an open-ended question that required them to explain the rationale for an experimental design (i.e., giving 500 people a drug while not giving the drug to 500 additional people as a control group). A smaller percentage of people answered this set of questions correctly in 2012 than in 2010, when 51% answered correctly (table 7-9). However, this change should be treated with particular caution because these open-ended responses rely on human coders to categorize them and because the 2010 figure represents a historical high.[19]
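
To make the logic of the experimental design question concrete, the following sketch simulates a hypothetical trial of the kind the question describes: 500 people receive the drug and 500 do not, and the untreated control group is what allows the drug's effect to be separated from improvement that would have happened anyway. The effect sizes are invented for illustration.

```python
import random

# Illustrative sketch (invented effect sizes): why the 500-person control group
# matters. Some people's blood pressure improves regardless of treatment, so only
# the difference between treated and untreated groups isolates the drug's effect.
random.seed(1)
BASELINE_IMPROVEMENT = 0.30  # assumed share who improve without the drug
DRUG_ADDED_EFFECT = 0.15     # assumed additional share who improve with the drug

treated = [random.random() < BASELINE_IMPROVEMENT + DRUG_ADDED_EFFECT
           for _ in range(500)]
control = [random.random() < BASELINE_IMPROVEMENT for _ in range(500)]

treated_rate = sum(treated) / len(treated)
control_rate = sum(control) / len(control)
print(f"Improved with drug: {treated_rate:.0%}")
print(f"Improved without drug: {control_rate:.0%}")
print(f"Estimated drug effect: {treated_rate - control_rate:.0%}")
```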

The percentage of people the 2012 GSS judged as understanding what it means to study something scientifically was more consistent with previous surveys. About 20% of Americans were scored as correctly answering the GSS question on this topic. When describing the scientific method, these respondents mentioned that it involves at least one of the following: testing a theory using hypotheses, conducting an experiment with a control group, or making rigorous and systematic comparisons. The percentage of Americans providing at least one of these acceptable answers has declined somewhat from a high of 26% in 2001, although the 2012 result is similar to percentages in recent years.

Overall, when these questions are combined into an overall measure of “understanding of scientific inquiry,” the 2012 results are relatively low compared with those from other years. About 33% of Americans could both correctly respond to the two questions about probability and provide a correct response to at least one of the open-ended questions about experimental design or what it means to study something scientifically. The 2010 survey represents a high point (42%), and the current result is closest to scores seen in the late 1990s but lower than scores in the other surveys conducted since 2001 (table 7-9; appendix table 7-11). In general, respondents with more education did better on the scientific inquiry questions (figure 7-8; appendix table 7-12).
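
For clarity, the composite indicator described above can be stated as a simple rule. The sketch below is a plain restatement of that rule, not the GSS's actual scoring code: a respondent counts as understanding scientific inquiry only if both probability questions are correct and at least one of the two open-ended items receives an acceptable answer.

```python
# Plain restatement of the combined "understanding of scientific inquiry" rule
# described above (not the GSS's actual scoring code).
def understands_scientific_inquiry(prob_q1: bool, prob_q2: bool,
                                   experiment_ok: bool, method_ok: bool) -> bool:
    return prob_q1 and prob_q2 and (experiment_ok or method_ok)

print(understands_scientific_inquiry(True, True, False, True))   # True
print(understands_scientific_inquiry(True, False, True, True))   # False
```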

International Comparisons

The 2011 BBVA Foundation survey of 10 European countries and the United States included the standard question about probability in the context of genetic disease. In this instance, 61% of Americans could correctly indicate that a child’s susceptibility to a genetic disease was unaffected by whether the child’s siblings suffered from the disease. This percentage is substantially lower than the 82% found in the 2012 GSS (see previous section). The 10-country European average was 49%, but residents of both Denmark (81%) and the Netherlands (79%) did better on this question than Americans. UK residents (60%) had a score nearly identical to that of U.S. residents (BBVA Foundation 2012a).

Recent surveys from Asia also touch on reasoning and understanding. A 2010 Chinese survey reported that 49% understood the idea of probability, 20% understood the need for comparisons in research, and 31% understood the idea of “scientific research” (CRISP 2010). The exact wording of the questions used was not available, but given that much of the survey replicated past U.S. questions reported in Science and Engineering Indicators, it seems likely that these questions were similar to those asked in the United States. In a July 2011 Japanese survey, 62% correctly answered a multiple choice question about the use of control groups in research experiments, whereas 57% answered correctly in a follow-up December 2011 survey (NISTEP 2012). A Korean survey used self-report measures of knowledge. Koreans were most likely to say they knew “well” or “very well” about diseases (54%) and least likely to say they knew about nanotechnology (14%). Koreans were also unlikely to say they knew about stem cell research (15%) and genetic modification (20%) (KOFAC 2011).

Comparisons of Adult and K–12 Student Understanding

The 2008 GSS included several additional questions on the scientific process that also indicated that many Americans lack an understanding of experimental design.[20] Between 29% and 57% of Americans responded correctly to various questions measuring the concepts of scientific experiment and controlling variables. Only 12% of Americans responded correctly to all the questions on this topic, and nearly 20% did not respond correctly to any of them (NSB 2010). These data raise further questions about how well Americans can reliably apply a generalized understanding of experimental design across different situations. Responses to these questions also allowed a comparison between adults’ understanding of experimentation and that of middle school students tested on the same questions. On the three experimental knowledge questions in which direct comparison is possible, adults’ scores were similar to a national sample of middle school students on one question but were lower on two others (NSB 2010).

Pseudoscience

Another indicator of public understanding about S&T comes from a measure focused on the public’s capacity to distinguish science from pseudoscience. Since 1979, surveys have asked Americans whether they view astrology as being scientific. In 2012, about half of Americans (55%) said astrology is “not at all scientific.” One-third (32%) said they thought astrology was “sort of scientific,” and 10% said it was “very scientific.” About 4% said they did not know. In comparison, in 2010, 62% of Americans said that astrology was not scientific, and this percentage has hovered between 55% (2012) and 66% (2004) since 1985. The only years when a smaller percentage of respondents said that astrology was not at all scientific were in 1979, when 50% gave this response, and in 1983, when 51% gave this response.

Respondents with more years of formal education and higher income were less likely to see astrology as scientific. For example, in 2012, 72% of those with graduate degrees indicated that astrology is “not at all scientific,” compared with 34% of those who did not graduate from high school. Between 2010 and 2012, responses to the astrology question changed more among Americans with less education and less factual knowledge than among other Americans. For example, in 2010, 79% of those high in factual knowledge said astrology was “not at all scientific,” only 5 percentage points more than the 74% who gave this response in 2012. In contrast, 52% of those with the lowest factual knowledge said astrology was unscientific in 2010, compared with 35% in 2012, a 17 percentage point change.

Age was also related to perceptions of astrology. Younger respondents, in particular, were the least likely to regard astrology as unscientific, with 42% of the youngest age group (18–24) saying that astrology is “not at all scientific.” The largest change, however, occurred in the 35–44 age group. In 2010, 64% of respondents in this group said that astrology was not scientific, whereas 51% gave this response in 2012, a 13 percentage point change (appendix table 7-13).[21]

International Comparisons

A 2010 Chinese survey had multiple questions about superstition. It found that 80% of respondents did not believe in “fortune telling sticks,” 82% did not believe in face reading, 87% did not believe in dream interpretation, 92% did not believe in horoscopes, and 95% did not believe in “computer fortune telling” (CRISP 2010).

Perceived Knowledge about Causes and Solutions to Environmental Problems

U.S. Patterns and Trends

Along with actual knowledge, perceived knowledge may also affect individuals’ attitudes and behaviors (Ladwig et al. 2012; Griffin, Dunwoody, and Yang 2013). The 2010 GSS included two questions about how much Americans believed they personally knew about the causes of and solutions to environmental problems. These questions used a 5-point scale that went from “1” for “know nothing at all” to “5” for “know a great deal.” About 27% of Americans chose a “4” or “5” when asked to assess their knowledge of the causes of environmental problems, and 14% chose “4” or “5” to describe their knowledge of environmental solutions (figure 7-9; appendix tables 7-14 and 7-15).

International Comparisons

The 2010 International Social Survey Programme (ISSP) allows for international comparisons of perceived science knowledge. The 2010 ISSP asked questions in 31 countries, including the United States, about perceived knowledge bearing on environmental issues. The results show that residents of most other countries surveyed expressed more confidence than Americans about their knowledge of the causes of and solutions to environmental problems. The country with the highest percentage of survey takers choosing “4” or “5” on the 5-point scale for perceived knowledge of the causes of environmental problems was Norway (50%). The United States (27%) had a much lower percentage, although its percentage was similar to that of many other countries. Only Slovak Republic respondents reported less knowledge, on average, than U.S. respondents about causes of environmental problems. Residents of more than half of the countries surveyed gave responses that suggested they knew more.

On the subject of environmental solutions, the top countries saw about one-third of residents saying they understood the solutions to environmental problems. The United States (14%) was among the countries with the lowest percentages of residents who said they understood the solutions to environmental problems. Only the Russians (13%) reported less knowledge, on average, than the Americans about environmental solutions. It is also noteworthy that no country’s citizens thought they knew more about solutions than causes but that the difference in mean scores for the two questions was almost always less than half a point on the 5-point scale used by the ISSP (figure 7-9; appendix tables 7-14 and 7-15).

Notes
[14] Survey items that test factual knowledge sometimes use easily comprehensible language at the cost of scientific precision. This may prompt some highly knowledgeable respondents to believe that the items blur or neglect important distinctions, and in a few cases may lead respondents to answer questions incorrectly. In addition, the items do not reflect the ways that established scientific knowledge evolves as scientists accumulate new evidence. Although the text of the factual knowledge questions may suggest a fixed body of knowledge, it is more accurate to see scientists as making continual, often subtle modifications in how they understand existing data in light of new evidence. When the answer to a factual knowledge question is categorized as “correct,” it means that the answer accords with the current consensus among knowledgeable scientists and that the weight of scientific evidence clearly supports the answer.
[15] Although the data clearly show a difference in how respondents answer the two versions of these questions, they do not indicate what caused the difference. A range of explanations is possible.
[16] In its own international comparison of scientific literacy, Japan ranked itself 10th among the 14 countries it evaluated (NISTEP 2002).
[17] Twenty questions used a true-or-false format. These included:
(1) “Hot air rises” (true; Europe correct: 91%, United States correct: 95%);
(2) “The continents have been moving for millions of years and will continue to move in the future” (true; Europe correct: 86%, United States correct: 80%);
(3) “The oxygen we breathe comes from plants” (true; Europe correct: 83%, United States correct: 94%);
(4) “The gene is the basic unit of heredity of living beings” (true; Europe correct: 82%, United States correct: 82%);
(5) “Earth’s gravity pulls objects towards it without being touched” (true; Europe correct: 79%, United States correct: 80%);
(6) “Energy cannot be created or destroyed, but only changed from one form to another” (true; Europe correct: 66%, United States correct: 80%);
(7) “Almost all microorganisms are harmful to human beings” (false; Europe correct: 63%, United States correct: 56%);
(8) “Generally speaking, human cells do not divide” (false; Europe correct: 63%, United States correct: 58%);
(9) “The earliest humans lived at the same time as the dinosaurs” (false; Europe correct: 61%, United States correct: 43%);
(10) “Plants have no DNA” (false; Europe correct: 60%, United States correct: 64%);
(11) “The greenhouse effect is caused by the use of nuclear power” (false; Europe correct: 58%, United States correct: 47%);
(12) “All radioactivity is a product of human activity” (false; Europe correct: 56%, United States correct: 62%);
(13) “Ordinary tomatoes, the ones we normally eat, do not have genes, whereas genetically engineered tomatoes do” (false; Europe correct: 54%, United States correct: 48%);
(14) “It is the father’s gene that determines a newborn baby’s sex, whether it is a boy or a girl” (true; Europe correct: 52%, United States correct: 75%);
(15) “Lasers work by sound waves” (false; Europe correct: 48%, United States correct: 54%);
(16) “The light that reaches the Earth from the sun is made up of a single color: white” (false; Europe correct: 44%, United States correct: 55%);
(17) “Today it is not possible to transfer genes from humans to animals” (false; Europe correct: 41%, United States correct: 43%);
(18) “Atoms are smaller than electrons” (false; Europe correct: 38%, United States correct: 50%);
(19) “Antibiotics destroy viruses” (false; Europe correct: 36%, United States correct: 47%);
(20) “Human stem cells are extracted from human embryos without destroying the embryos” (false; Europe correct: 29%, United States correct: 54%).
Two additional questions used a multiple choice format. These asked about (21) whether the sun moves around the Earth, whether the Earth moves around the sun (correct), or neither the sun nor the Earth moves (Europe correct: 80%, United States correct: 82%); and (22) whether light travels faster than sound (correct), sound travels faster than light, or whether they travel at equal speed (Europe correct: 74%, United States correct: 78%).
[18] Earlier NSF surveys used for the Indicators report used additional questions to measure understanding of probability. Bann and Schwerin (2004) identified a smaller number of questions that could be administered to develop a comparable indicator. Starting in 2004, the NSF surveys used these questions for the trend factual knowledge scale.
[19] The evidence for the 2012 decline in understanding of experimental design needs to be regarded with caution. It is important to note that the percentage of Americans who correctly answered the initial, multiple choice question about how to conduct a pharmaceutical trial stayed stable between 2010 and 2012. It was only the follow-up question, which asked respondents to use their own words to justify the use of a control group, that saw a decline. For this question, interviewers recorded the response, and trained coders then used a standard set of rules to judge whether the response was correct. Although the instructions and training have remained the same in different years, small changes in survey administration practices can sometimes substantially affect such estimates.
[20] The questions were selected from the Trends in International Mathematics and Science Study, the National Assessment of Educational Progress, practice General Educational Development exams, and the American Association for the Advancement of Science Project 2061.
[21] The pseudoscience section focuses on astrology because of the availability of long-term national trend indicators on this subject. Other examples of pseudoscience include belief in lucky numbers, unidentified flying objects (UFOs), extrasensory perception (ESP), and magnetic therapy. One difficulty with this question is that astrology is based on observation of the planets and stars, and respondents might believe that this makes it “sort of scientific.” However, the fact that those with more formal education and higher factual knowledge scores are consistently more likely to reject astrology as a science suggests that this nuance has a limited impact on results.