
Chapter 7. Science and Technology: Public Attitudes and Understanding

Public Knowledge About S&T


Scientific literacy can be relevant to the public policy and personal choices that people make. In developing measures for scientific literacy across nations, the Organisation for Economic Co-operation and Development (OECD) (2003) noted that literacy had several components:

Current thinking about the desired outcomes of science education for all citizens emphasizes the development of a general understanding of important concepts and explanatory frameworks of science, of the methods by which science derives evidence to support claims for its knowledge, and of the strengths and limitations of science in the real world. It values the ability to apply this understanding to real situations involving science in which claims need to be assessed and decisions made…

Scientific literacy is the capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity. (pp. 132–33)

As the reference to changes made through human activity makes clear, the OECD definition implies an understanding of technology. The OECD takes the view that literacy is a matter of degree and that people cannot simply be classified as either literate or not literate.

A good understanding of basic scientific terms, concepts, and facts; an ability to comprehend how science generates and assesses evidence; and a capacity to distinguish science from pseudoscience are widely used indicators of scientific literacy. (For a different perspective on scientific literacy, see sidebar, "Asset-Based Models of Knowledge.")

U.S. survey data indicate that many Americans cannot provide correct answers to basic questions about scientific facts and do not apply appropriate reasoning strategies to questions about selected scientific issues. Residents of other countries, including highly developed ones, perform no better, on balance, when asked similar questions. However, compared with middle-school students, American adults perform relatively well. In light of the limitations of using a small number of questions largely keyed to knowledge taught in school, generalizations about Americans' knowledge of science should be made cautiously.


Understanding Scientific Terms and Concepts

U.S. Patterns and Trends
U.S. data show that the public's level of factual knowledge about science has not changed much over time. Figure 7-8 shows average numbers of correct answers to a series of mostly true-false science questions in different years for which fully comparable data were collected (appendix table 7-8).[10] Although performance on individual questions varies somewhat over time (appendix table 7-9), overall scores are relatively similar.

Factual knowledge of science is positively related to people's level of formal schooling, income level, and the number of science and math courses they have taken. Factual knowledge is also positively related to scores on a 10-item vocabulary test included in the GSS, which scholars in many disciplines have often used to assess verbal skills (Malhotra, Krosnick, and Haertel 2007).[11] On the factual questions included in NSF surveys since 1979, which allow trends to be observed over time (referred to below as "trend factual questions"), men score higher on questions in the physical sciences and women score higher on those in the biological sciences (table 7-4).[12]

Respondents 65 and older are less likely than others to answer the questions correctly (appendix tables 7-8 and 7-10). An analysis of surveys conducted between 1979 and 2006 concluded that generational experiences are more important than cognitive declines associated with aging in explaining these differences (Losh 2009, 2010).

The factual knowledge questions that have been repeatedly asked in U.S. surveys involve information that was being taught in grades K–12 when most respondents were young. Because science continually generates new knowledge that reshapes how people understand the world, scientific literacy requires lifelong learning so that citizens become familiar with terms, concepts, and facts that emerged after they completed their schooling.

In 2008, the GSS asked Americans questions that tested their knowledge of a topic that has not been central to the standardized content of American science education: nanotechnology. Survey respondents who scored relatively well overall on the questions that were asked repeatedly over the years also exhibited greater knowledge of this topic (figure 7-9).[13] Likewise, the educational and demographic characteristics associated with higher scores on the trend factual knowledge questions are also associated with higher scores for this new topic (appendix table 7-11). These data suggest that the knowledge items used to measure trends, although focused on the kind of factual knowledge learned in school, are a reasonable indicator of factual science knowledge in general, including knowledge that is acquired later in life.

Similarly, national standards for what students should know reflect new science concepts beyond those covered by the long-standing questions that measure trends in public knowledge of science. In 2008, the GSS included questions on science and mathematics knowledge that were more closely aligned with these standards. The questions were selected from three national exams administered to students and from Project 2061, an initiative of the American Association for the Advancement of Science (AAAS) that develops assessment materials aligned with current curricular standards.[14] This battery included nine factual questions, two questions measuring chart reading and understanding of the statistical concept of "mean," and five questions testing reasoning and understanding of the scientific process. Two of the 16 questions were open-ended; the rest were multiple-choice (see sidebar, "New Science Knowledge Questions").[15]

The results show that survey respondents who answered the additional factual knowledge questions correctly also tended to provide correct answers to the trend factual knowledge questions (figure 7-10; appendix tables 7-10 and 7-12). This again suggests that the trend factual questions are a reasonable indicator of the type of knowledge students are tested on in national assessments.

On the seven factual science knowledge questions for which comparisons with fourth and eighth grade students were possible, adult Americans scored as high as or higher than the students on five (table 7-5). Comparisons should be made cautiously because of differences in the circumstances under which students and adults answered these questions. Students' tests were on paper and self-administered, whereas most GSS respondents answered orally to an interviewer. Elementary and middle school students also had an advantage over adults in that classroom preparation preceded their tests.

The patterns of variation on these items were similar to those on the trend factual questions. However, men scored higher than women on all but one of the additional factual knowledge questions included in the 2008 GSS (appendix tables 7-10 and 7-12).

International Comparisons
Adults in different countries and regions have been asked identical or substantially similar questions to test their factual knowledge of science. (For an examination of how question wording is related to international differences in knowledge measures, see sidebar, "Knowledge Difference or Measurement Error?") Knowledge scores for individual items vary from country to country, and no country consistently outperforms the others (figure 7-11). For the questions reported in figure 7-11, knowledge scores are relatively low in China, Russia, and Malaysia. Compared with the United States and the highly developed countries in Europe, Japanese scores are also relatively low.[16]

Science knowledge scores vary considerably across the EU-25 countries, with northern European countries, led by Sweden, recording the highest total scores on a set of 13 questions. On a smaller set of four items administered in 12 European countries in both 1992 and 2005, every country performed better in 2005. In contrast, U.S. science knowledge data do not show an upward trend over the same period. In Europe, as in the United States, men, younger adults, and more highly educated people tend to score higher on these questions. (For more details on scientific literacy in individual countries in Europe, see NSB 2008.)


Reasoning and Understanding the Scientific Process

Past NSF surveys have used questions on three general topics—probability, experimental design, and the scientific method—to assess trends in Americans' understanding of the process of scientific inquiry. One set of questions tests how well respondents apply principles of probabilistic reasoning to a series of questions about a couple whose children have a one-in-four chance of suffering from an inherited disease.[17] A second set of questions deals with the logic of experimental design, asking respondents about the best way to design a test of a new drug for high blood pressure. An open-ended question probes what respondents think it means to "study something scientifically." Because probability, experimental design, and the scientific method are all central to scientific research, these questions are relevant to how respondents evaluate scientific evidence.
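The logic the probability items reward can be made concrete with a short simulation. The sketch below (in Python) is a minimal illustration, not the survey instrument itself; it assumes a condition in which each birth is an independent event with a one-in-four chance of the illness, and shows that an affected first child leaves the second child's risk unchanged:

    import random

    # Minimal sketch (not the survey's wording): each birth independently
    # carries a 1-in-4 chance of the inherited illness.
    P_ILLNESS = 0.25
    TRIALS = 200_000

    first_affected = 0
    both_affected = 0
    for _ in range(TRIALS):
        first = random.random() < P_ILLNESS   # first child affected?
        second = random.random() < P_ILLNESS  # second child affected?
        if first:
            first_affected += 1
            if second:
                both_affected += 1

    # Independence: the conditional probability stays near 0.25.
    print(both_affected / first_affected)  # ~0.25

A respondent who reasons that one affected child makes later children safer, or certain to be affected, is making the kind of error these questions are designed to detect.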

In 2008, 65% of Americans responded correctly to the two questions about probability, 38% to the questions testing the concept of an experiment, and 22% to the questions testing the concept of a scientific study. Scores on the probability questions fluctuate from year to year but are relatively stable over time, although the combined score on the two probability questions declined slightly between 2006 and 2008. Scores on the other scientific process questions were generally higher than in the mid-1990s but decreased somewhat in 2008 (appendix table 7-13). Performance on these questions is strongly associated with the different measures of science knowledge and education (appendix table 7-14). Older Americans and those with lower incomes, two groups that tend to have less education in the sciences, also score lower on the inquiry measures. Men and women obtain similar scores on these questions (tables 7-4 and 7-6).

The 2008 GSS included several additional questions on the scientific process that provide an opportunity to examine Americans' understanding of experimental design in more detail and to benchmark their scores against national results for middle school students. Between 29% and 57% of Americans responded correctly to individual questions measuring the concepts of a scientific experiment and controlling variables (appendix tables 7-13 and 7-15). However, only 12% of Americans responded correctly to all of the questions on this topic, and nearly 20% did not respond correctly to any of them (figure 7-12).[18] These data suggest that relatively few Americans have a generalized understanding of experimental design that they can reliably apply to different situations.
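As a concrete anchor for what "experimental design" and "controlling variables" mean in these items, the following minimal sketch (hypothetical code, not drawn from the GSS) shows the core design move the drug-testing question asks about: randomly splitting participants into a treatment group and a control group so that the groups differ systematically only in the treatment:

    import random

    # Hypothetical illustration of a controlled experiment.
    participants = list(range(1000))
    random.shuffle(participants)          # random assignment

    treatment_group = participants[:500]  # receive the new drug
    control_group = participants[500:]    # receive a placebo instead

    # Comparing outcomes between the two groups isolates the drug's
    # effect; giving the drug to everyone would leave no baseline.
    print(len(treatment_group), len(control_group))  # 500 500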

The proportion of Americans with a strong grasp of experimental design does not vary by sex. However, Americans who answered at least three of the four experimental knowledge questions correctly were more likely to have a college education or higher, to have taken more courses in math and science, and to have a clear understanding of the scientific method. They were also more likely to be in the top income bracket and to respond correctly to the factual science knowledge and probability questions.

Of the three experimental knowledge questions for which comparisons were possible, adults scored similarly to middle school students on one (question 2 in table 7-7) but lower on the other two.


Understanding of Statistics and Charts

Americans encounter basic statistics and charts in everyday life. Many media reports cite studies of health, social, economic, and political trends. Understanding statistical concepts is important to understanding the meaning of these studies and consequently to scientific literacy (Crettaz von Roten 2006). The results from the 2008 GSS show that 77% of Americans can read a simple chart correctly and 66% understand the statistical concept of "mean." Understanding these two concepts is associated with formal education, the number of math and science courses taken, income, and verbal ability. Older respondents were less likely to respond correctly to these two questions (appendix table 7-15).
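The statistical concept involved is elementary, and a one-line computation captures it. The numbers below are hypothetical, used only to illustrate what the "mean" question asks respondents to understand:

    # The mean is the sum of the values divided by how many there are.
    values = [120, 135, 118, 142, 130]
    mean = sum(values) / len(values)
    print(mean)  # 129.0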


Pseudoscience

The results of 13 NSF-funded surveys conducted between 1979 and 2008 show a trend toward fewer Americans seeing astrology as scientific. In the 2008 GSS, 63% of Americans indicated that astrology was "not at all scientific," and 28% said that it was only "sort of scientific." Respondents with more years of formal education were less likely to perceive astrology as at all scientific: in 2008, 78% of college graduates indicated that astrology was "not at all scientific," compared with 60% of high school graduates. Those who scored highest on the factual knowledge measures were more likely to say that astrology is not at all scientific (78%) than those who scored lowest (45%). Likewise, respondents who correctly understood the concept of scientific inquiry were more likely to say that astrology is not at all scientific (74%) than those who did not understand the concept (57%). However, the youngest age group (18–24) was less likely to say astrology is "not at all scientific" (49%) and more likely to say it is "sort of scientific" (44%) (appendix table 7-16).[19]

Notes

[10] Survey items that test factual knowledge sometimes use easily comprehensible language at the cost of scientific precision. This may prompt some highly knowledgeable respondents to feel that the items blur or neglect important distinctions, and in a few cases may lead respondents to answer questions incorrectly. In addition, the items do not reflect the ways that established scientific knowledge evolves as scientists accumulate new evidence. Although the text of the factual knowledge questions may suggest a fixed body of knowledge, it is more accurate to see scientists as making continual, often subtle, modifications in how they understand existing data in light of new evidence.
[11] Formal schooling and verbal skills are positively associated. In the 2008 GSS data, verbal skills contributed to factual knowledge even when controlling for education.
[12] Among respondents with comparable formal education, attending informal science institutions was associated with greater knowledge.
[13] The two nanotechnology questions were asked only of respondents who said they had some familiarity with nanotechnology, and a sizable majority of those who ventured a response other than "don't know" answered the questions correctly. To measure nanotechnology knowledge more reliably, researchers would prefer a scale with more than two questions.
[14] The questions were selected from the Trends in International Mathematics and Science Study (TIMSS), the National Assessment of Educational Progress (NAEP), practice General Educational Development (GED) exams, and AAAS Project 2061.

[15] The scoring of the open-ended questions closely followed the scoring of the corresponding test administered to middle-school students.
The NAEP question "Lightning and thunder happen at the same time, but you see the lightning before you hear the thunder. Explain why this is so" was scored as follows (a worked version of the underlying arithmetic appears after the rubric):

  • Complete: The response provided a correct explanation including the relative speeds at which light and sound travel. For example, "Sound travels much slower than light so you see the light sooner at a distance."
  • Partial: The response addressed speed and used terminology such as thunder for sound and lightning for light, or made a general statement about speed but did not indicate which is faster. For example, "One goes at the speed of light and the other at the speed of sound."
  • Unsatisfactory/Incorrect: Any response that did not mention the faster speed of light or, equivalently, the slower speed of sound. For example: "Because the storm was further out." or "Because of static electricity."
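The arithmetic behind a complete response can be spelled out with approximate textbook speeds. The figures below are illustrative and are not part of the NAEP item:

    # Approximate values; the speed of sound varies with temperature.
    SPEED_OF_LIGHT = 3.0e8  # meters per second
    SPEED_OF_SOUND = 343.0  # meters per second, in air near 20 degrees C

    distance_m = 1000.0     # a lightning strike 1 km away
    print(distance_m / SPEED_OF_LIGHT)  # ~0.0000033 s: effectively instant
    print(distance_m / SPEED_OF_SOUND)  # ~2.9 s before the thunder arrives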

The TIMSS question "A solution of hydrochloric acid (HCl) in water will turn blue litmus paper red. A solution of the base sodium hydroxide (NaOH) in water will turn red litmus paper blue. If the acid and base solutions are mixed in the right proportion, the resulting solution will cause neither red nor blue litmus paper to change color. Explain why the litmus paper does not change color in the mixed solution" was scored as follows (the reaction itself is written out after the rubric):

  • Correct: The response had to refer to a neutralization or a chemical reaction that results in products that do not react with litmus paper. Three kinds of answers were classified as correct:
    • The response referred explicitly to the formation of water (and salt) from the neutralization reaction (e.g., "Hydrochloric acid and sodium hydroxide will mix together to form water and salt, which is neutral.")
    • The response referred to neutralization (or the equivalent) even if the specific reaction is not mentioned (e.g., "The mixed solution is neutral, so litmus paper does not react.")
    • The response referred to a chemical reaction taking place (implicitly or explicitly) to form products that do not react with litmus paper (or a similar substance), even if neutralization was not explicitly mentioned (e.g., "The acid and base react, and the new chemicals do not react with litmus paper.")
  • Partially correct: The response mentioned only that acids and bases are "balanced," "opposites," or "cancel each other out," or only that a salt forms, without mentioning the neutralization reaction. These answers suggest that the respondent remembered the concept but used imprecise terminology or gave only a partial answer. For example, "they balance each other out."
  • Incorrect: The response did not mention any of the above, was too partial or incomplete, or used terminology that was too imprecise. For example, "Because they are base solutions—the two bases mixed together there is no reaction." or "There is no change. Both colors change to the other."
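For reference, the neutralization reaction that a complete response describes is standard chemistry; the equation below is supplied here for clarity and is not part of the TIMSS item:

    HCl + NaOH → NaCl + H2O

The products, salt and water, leave the mixed solution neutral, which is why neither red nor blue litmus paper changes color.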
[16] In its own international comparison of scientific literacy, Japan ranked itself 10th among the 14 countries it evaluated (National Institute of Science and Technology Policy 2002).
[17] Early NSF surveys used additional questions to measure understanding of probability. Through a process similar to that described in note 12, Bann and Schwerin (2004) identified a smaller number of questions that could be administered to develop a comparable indicator. These questions were administered in 2004 and 2006, and appendix tables 7-13 and 7-14 record combined probability responses using these questions; appendix table 7-13 also shows responses to individual probability questions in each year.
[18] Figure 7-12 includes four questions on experimental design included in appendix tables 7-13 and 7-15. Two of these four questions include a question on experimental design and a follow-up question asking "why"; correct responses to these two questions represent the combined responses and are incorporated into the figure as one question.
[19] The pseudoscience section focuses on astrology because of the availability of long-term national trend indicators on this subject. Other examples of pseudoscience include beliefs in lucky numbers, unidentified flying objects (UFOs), extrasensory perception (ESP), and magnetic therapy.
 

