Consciousness: Pushing the boundaries of science and pseudoscience

I attended a conference in La Jolla last week with the ambitious title, “The Science of Consciousness”. As it brought together neuroscientists, psychologists, biologists, physicists (like Roger Penrose), mathematicians, linguists (like Noam Chomsky), and many others, I looked forward to a variety of perspectives, including those outside the mainstream, but I got more than I bargained for. It turns out that it also included people more involved in various kinds of spirituality, wellness, meditation, and…interesting artistic interpretations.

(Image by Robert Fludd, 1619, Wikimedia Commons.)

Instead of shedding light on something as perplexing and seemingly impenetrable as consciousness, which people have been trying to understand for millennia, these other approaches threaten to undermine the whole enterprise. I worry that some of the conference could be better characterized as “The Pseudoscience of Consciousness.” And the distinction between science and pseudoscience never seemed more blurred.

But what do I know? Science hasn’t really given us much of an understanding of this murky concept. What is consciousness, and do only humans have it? What about babies, the elderly, and people with debilitating mental illnesses? Exactly which parts of the brain are involved (just the frontal cortex? microtubules in neurons everywhere?), and how did it appear in evolution?

Science is only getting us so far, and consciousness is a fundamental conundrum of the human condition, so why not consider other avenues toward probing it? But some people aren’t doing that argument any favors. There are people like Deepak Chopra (who was at the conference) who add the word “quantum” to their speculative if not fanciful ideas to try to make them sound profound or something. That’s B.S. (And anyway, the interpretation of the quantum behavior of particles and waves has remained disputed and poorly understood since its discovery some 90 years ago, so that’s not the best reference to make!)

I’m glad people continue to speculate and investigate different facets of consciousness, such as how we’re conscious of our perceptions of language and conversation, music, making and retrieving memories, etc. Some scientists are also studying the kinds of neuronal activity that are dampened by anesthetics and enhanced by psychoactive drugs, which sounds weird, but it might illuminate, just a bit, what’s going on in parts of our complex brains.

I’m also glad that people aren’t limiting this endeavor to science. After all, poets, philosophers, and musicians can offer insights no one else has thought of before, and we need to listen to them. But when there’s a risk of pseudoscience being passed off as science and gaining legitimacy at science’s expense, then we have a problem.

Maybe this sort of thing is inevitable when you’re pushing the frontiers of something unknown while answers remain elusive. For example, think of interstellar space exploration, which also naturally captivates the imagination of a wide range of people. At times the consciousness conference reminded me of parts of the “Finding Earth 2.0” conference organized by 100-Year Starship that I went to back in 2015. While some impressive people like Jill Tarter and Mae Jemison focused on space travel technology and the search for extraterrestrial intelligence (SETI), other people worked on things like “astrosociology.” I was expecting people to talk about what it might be like for a handful of people to be stuck in an enclosed spaceship for years or in a slightly larger planetary colony for decades. Those are important and tractable questions, and scientists at NASA and elsewhere are studying them right now. But instead a handful of people spoke about giant ships at least a century in the future, like it was Battlestar Galactica or the starship Enterprise or something. Yes, let’s think about what things might be like in the 23rd century, but all that’s premature unless we figure out how to get there first.

Reproducibility in Science: Study Finds Psychology Experiments Fail Replication Test

Scientists toiling away in their laboratories, observatories, and offices don’t generally fabricate data, plagiarize other research, or invent questionable conclusions when publishing their work. Participating in any of these dishonest activities would be like violating a scientific Hippocratic oath. So why do so many scientific studies and papers turn out to be unreliable or flawed?

(Credit: Shutterstock/Lightspring)

In a massive analysis of 100 recently published psychology papers with different research designs and authors, University of Virginia psychologist Brian Nosek and his colleagues find that more than half of them fail replication tests. Only 39% of the psychology experiments could be replicated unambiguously, and studies claiming surprising effects, or effects that were challenging to replicate, were less reproducible. They published their results in the new issue of Science.

Nosek began crowdsourcing the Reproducibility Project in 2012, when he reached out to nearly 300 members of the psychology community. Scientists lead and work on many projects simultaneously for which they receive credit when publishing their own papers, so it takes some sacrifice to take part: the replication paper lists the authors of the Open Science Collaboration alphabetically, rather than in order of their contributions, and working with so many people presents logistical difficulties. Nevertheless, considering the importance of scientific integrity and of investigating the reliability of analyses and results, such an undertaking is worthwhile to the community. (I have participated in similarly large collaborative projects in the past, which I believe have benefited the astrophysical community.)

The researchers evaluated five complementary indicators of reproducibility using significance and p-values, effect sizes, subjective assessments by the replication teams, and meta-analyses of effect sizes. Although a failure to reproduce does not necessarily mean that the original report was incorrect, they state that such “replications suggest that more investigation is needed to establish the validity of the original findings.” This is diplomatic scientist-speak for: “people have reason to doubt the results.” In the end, the scientists in this study find that in the majority of cases, the p-values are higher (making the results less significant or statistically insignificant) and the effect size is smaller or even goes in the opposite direction of the claimed trend!
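
To make that comparison concrete, here is a minimal sketch (in Python, with invented numbers rather than the paper’s data) of the kind of side-by-side check involved: compute a p-value and a standardized effect size for an “original” result and for its “replication.”

```python
# Toy comparison of an "original" study and its "replication", in the spirit of
# the indicators described above. All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Original study: small sample that happens to show an apparently strong effect.
original = rng.normal(loc=0.5, scale=1.0, size=30)
# Replication: larger sample drawn from a weaker true effect.
replication = rng.normal(loc=0.1, scale=1.0, size=100)

for label, data in [("original", original), ("replication", replication)]:
    t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)  # significance test
    cohens_d = data.mean() / data.std(ddof=1)               # standardized effect size
    print(f"{label:12s} p = {p_value:.3f}   effect size d = {cohens_d:+.2f}")
```

In the real project the replication teams reran the original experimental protocols; the sketch only shows how p-values and effect sizes end up being compared.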

Effects claimed in the majority of studies cannot be reproduced. Figure shows density plots of original and replication p-values and effect sizes (correlation coefficients).

Note that this meta-analysis has a few limitations and shortcomings. Some hard-to-replicate studies or analysis methods involve research that pushes the limits or tests very new or little-studied questions, and if scientists only asked easy questions, or questions to which they already knew the answer, the research would not contribute much to the advancement of science. In addition, I could find no comment in the paper about situations in which scientists face the prospect of replicating their own or competitors’ previous papers; presumably they avoided potential conflicts of interest.

These contentious conclusions could shake up the social sciences and subject more papers and experiments to scrutiny. This isn’t necessarily a bad thing; according to Oxford psychologist Dorothy Bishop in the Guardian, it could be “the starting point for the revitalization and improvement of science.”

In any case, scientists must acknowledge the publication of so many questionable results. Since scientists generally strive for honesty, integrity and transparency, and cases of outright fraud are extremely rare, we must investigate the causes of these problems. As pointed out by Ed Yong in the Atlantic, like many sciences, “psychology suffers from publication bias, where journals tend to only publish positive results (that is, those that confirm the researchers’ hypothesis), and negative results are left to linger in file drawers.” In addition, some social scientists have published what first appear to be startling discoveries but turn out to be cases of “p-hacking…attempts to torture positive results out of ambiguous data.”

Unfortunately, this could also provide more fuel for critics of science, who already seem to have enough ammunition judging by overblown headlines pointing to increasing numbers of scientists retracting papers, often due to misconduct, such as plagiarism and image manipulation. In spite of this trend, as Christie Aschwanden argues in a FiveThirtyEight piece, science isn’t broken! Scientists should be cautious about unreliable statistical tools though, and p-values fall into that category. The psychology paper meta-analysis shows that p<0.05 tests are too easy to pass, but scientists knew that already, as the Basic and Applied Social Psychology journal banned p-values earlier this year.
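
For intuition about why p < 0.05 is such a low bar, here is a small simulation of my own (not taken from any of the articles above): even when there is no real effect at all, about 1 in 20 analyses will clear the threshold by chance, and trying many analyses, which is the essence of p-hacking, makes a “significant” result almost inevitable.

```python
# Simulate many studies of a nonexistent effect and count how many pass p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_samples = 1000, 50

false_positives = 0
for _ in range(n_studies):
    data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # no true effect at all
    _, p_value = stats.ttest_1samp(data, popmean=0.0)
    if p_value < 0.05:
        false_positives += 1

print(f"'Significant' findings with no real effect: {false_positives} out of {n_studies}")
# Expect roughly 50 (about 5%), before any selective reporting or p-hacking.
```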

Furthermore, larger trends may be driving the publication of such problematic science papers. Increasing competition between scientists for high-status jobs, federal grants, and speaking opportunities at high-profile conferences pressures scientists to publish more and to publish provocative results in major journals. To quote the Open Science Collaboration’s paper, “the incentives for individual scientists prioritize novelty over replication.” Meanwhile, overextended peer reviewers and editors often lack the time to properly vet and examine submitted manuscripts, making it more likely that problematic papers slip through and carry much more weight upon publication. At that point, it can take a while to refute an influential published paper or reduce its impact on the field.

Source: American Society for Microbiology, Nature

When I worked as an astrophysics researcher, I carefully reviewed numerous papers for many different journals and considered that work an important part of my job. Utilizing multiple reviewers per manuscript and paying reviewers for their time might improve the situation. In any case, most scientists recognize that though peer review plays an important role in the process, it is no panacea.

I am proud of all of my research papers, but at times I wished I had more time for additional or more comprehensive analyses in order to be more thorough and certain about some results. This can be prohibitively time-consuming for any scientist—theorists, observers, and experimentalists alike—but scientists draw the line at different places when deciding whether or when to publish research. I also feel that I have sometimes been too conservative in the presentation of my conclusions, while some scientists make claims that go far beyond the limited implications of uncertain results.

Some scientists jump on opportunities to publish the most provocative results they can find, and science journalists and editors love a great headline, but we should express skepticism when people announce unconvincing or improbable findings, as many of them turn out to be wrong. (Remember when OPERA physicists thought that neutrinos could travel faster than light?)

When conducting research and writing and reviewing papers, scientists should aim for as much transparency and openness as possible. The Open Science Framework demonstrates how such research could be done, where the data are accessible to everyone and individual scientists’ contributions can be tracked. With such a “GitHub-like version control system, it’s clear exactly who takes responsibility for what part of a research project, and when—helping resolve problems of ownership and first publication,” writes Katie Palmer in Wired. As Marcia McNutt, editor in chief of Science, says, “authors and journal editors should be wary of publishing marginally significant results, as those are the ones that are less likely to reproduce.”

If some newly published paper is going to attract the attention of the scientific community and news media, then it must be sufficiently interesting, novel or even contentious, so scientists and journalists must work harder to strike that balance. We should also remember that, for better or worse, science rarely yields clear answers; it usually leads to more questions.

Tussles in Brussels: How Einstein vs Bohr Shaped Modern Science Debates

In one corner, we have a German-born theoretical physicist famous for his explanation of the photoelectric effect and his groundbreaking research on relativity theory. In the opposite corner, hailing from Denmark, we have a theoretical physicist famous for his transformational work on quantum theory and atomic structure. Albert Einstein and Niels Bohr frequently butted heads over the interpretation of quantum mechanics and even over the scope and purpose of physics, and their debates still resonate today.

Niels Bohr and Albert Einstein (photo by Paul Ehrenfest, 1925).

In a class on “Waves, Optics, and Modern Physics,” I am teaching my students fundamentals of quantum physics, and I try to incorporate some of this important history too. In the early 20th century, physicists gradually adopted new concepts such as discrete quantum energy states and wave-particle duality, in which under certain conditions light and matter exhibit both wave and particle behavior. Nevertheless, other quantum concepts proposed by Bohr and his colleagues, such as non-locality and a probabilistic view of the wave function, proved more controversial. These are not mere details; much more was at stake, namely whether one can retain the scientific realism and determinism of classical physics if Bohr’s interpretation turns out to be correct.

Bohr had many younger followers trying to make names for themselves, including Werner Heisenberg, Max Born, Wolfgang Pauli, and others. As experimental physicists explored small-scale physics, new phenomena required explanations. One could argue that some of Bohr and his followers’ discoveries and controversial hypotheses were to some extent just developments of models that managed to fit the data, and the models needed a coherent theoretical framework on which to rest. On the other hand, Einstein, Erwin Schrödinger, and Louis de Broglie were skeptical or critical of some of these proposals.

The debates between Einstein and Bohr came to a head as they clashed in Brussels in 1927 at the Fifth Solvay Conference and at the next conference three years later. It seems like all of the major physics figures of the day were present, including Einstein, Bohr, Born, Heisenberg, Pauli, Schrödinger, de Broglie, Max Planck, Marie Curie, Paul Dirac, and others. (Curie was the only woman there, as physics had an even bigger diversity problem back then. The nuclear physicist Lise Meitner came on the scene a couple of years later.)

Conference participants, October 1927. Institut International de Physique Solvay, Brussels.

Einstein tried to argue, with limited success, that quantum mechanics is inconsistent. He also argued, with much more success in my opinion, that (Bohr’s interpretation of) quantum mechanics is incomplete. Ultimately, however, Bohr’s interpretation carried the day and became physicists’ “standard” view of quantum mechanics, in spite of later developments by David Bohm supporting Einstein’s realist interpretation.

Although the scientific process leads us in fruitful directions and encourages us to explore important questions, it does not take us directly and inevitably toward a unique “truth.” It’s a messy nonlinear process, and since scientists are humans too, the resolution of scientific debates can depend on historically contingent social and cultural factors. James T. Cushing (my favorite professor when I was an undergraduate) argued as much in his book, Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony.

Why do the Einstein vs Bohr debates still fascinate us—as well as historians, philosophers, and sociologists—today? People keep discussing and writing about them because these two brilliant and compelling characters confronted each other about issues with implications about the scope and purpose of physics and how we view the physical world. Furthermore, considering the historically contingent aspects of these developments, we should look at current scientific debates with a bit more skepticism or caution.

Implications for Today’s Scientific Debates

In recent years, we have witnessed many intriguing disagreements about important issues in physics and astrophysics and in many other fields of science. For example, in the 1990s and 2000s, scientists debated whether the motions, masses, and distributions of galaxies were consistent with the existence of dark matter particles or whether gravitational laws must be modified. Now cosmologists disagree about the likely nature of dark energy and about the implications of inflation for the multiverse and parallel universes. And string theory is a separate yet tenuously connected debate. On smaller scales, we have seen debates between astrobiologists about the likelihood of intelligent life on other planets, about whether to send missions to other planets, and even disagreements about the nature of planets, which came to the fore with Pluto‘s diminished status.

Scientists play major roles in each case and sometimes become public figures, including Stephen Hawking, Neil deGrasse Tyson, Roger Penrose, Brian Greene, Sean Carroll, Max Tegmark, Mike Brown, Carolyn Porco, and others. Moreover, many scientists are also science communicators and actively participate in social media, as conferences aren’t the only venues for debates anymore. For example, 14 of the top 50 science stars on Twitter are physicists or astronomers. Many scientists communicate their views to the public, and people want to hear them weigh in on important issues and on “what it all means.” (Contrary to an opinion expressed by deGrasse Tyson, physicists are philosophers too.)

In any case, as scientific debates unfold, we should keep in mind that sometimes we cannot find a unique elegant explanation to a phenomenon, or if such an explanation exists, it may remain beyond our grasp for a long time. Furthermore, we should keep our minds open to the possibility that our own interpretation of a scientific phenomenon could be incomplete, incoherent, or even incorrect.

Is “Data-driven Science” an Oxymoron?

In recent years, we’ve been repeatedly told that we’re living and working in an era of Big Data (and Big Science). We’ve heard how Nate Silver and others are revolutionizing how we analyze and interpret data. In many areas of science and in many aspects of life, for that matter, we’re obtaining collections of datasets so large and complex that it becomes necessary to change our traditional analysis methods. Since the volume, velocity, and variety of data are rapidly increasing, it is increasingly important to develop and apply appropriate techniques and statistical tools.

However, is it true that Big Data changes everything? Much can be gained from proper data analysis and from “data-driven science.” For example, the popular story about Billy Beane and Moneyball shows how Big Data and statistics transformed how baseball teams are assessed. But I’d like to point out some misconceptions and dangers of the concept of data-driven science.

Governments, corporations, and employers are already collecting (too?) much of our precious, precious data and expending massive effort to study it. We might worry about this because of concerns about privacy, but we should also worry about what might happen to analyses that are excessively focused on the data. There are questions that we should be asking more often: Who’s collecting the data? Which data, and why? Which analysis tools, and why? What are their assumptions and priors? My main point will be that the results from computer codes churning through massive datasets are not objective or impartial, and the data don’t inevitably drive us to a particular conclusion. This is why the concept of “data-driven” anything is misleading.

Let’s take a look at a few examples of data-driven analysis that have been in the news lately…

Nate Silver and FiveThirtyEight

Many media organizations are about something, and they use a variety of methods to study it. In a sense, FiveThirtyEight isn’t really about something. (If I wanted to read about nothing, I’d check out ClickHole and be more entertained.) Instead, FiveThirtyEight is about their method, which they call “data journalism” and by which they mean “statistical analysis, but also data visualization, computer programming and data-literate reporting.”

I’m exaggerating though. They cover broad topics related to politics, economics, science, life, and sports. They’ve had considerable success in making probabilistic predictions about baseball, March Madness, and World Cup teams and in packaging statistics in a slick and easy-to-understand way. They also successfully predicted the 2012 US elections on a state-by-state basis, though they stuck to the usual script of treating it as a horse race: one team against another. Their statistical methods are sometimes “black boxes”, but if you look, they’ll often provide additional information about them. Their statistics are usually sound, but maybe they should be more forthcoming about the assumptions and uncertainties involved.

Their “life” section basically allows them to cover whatever they think is the popular meme of the day, which in my opinion isn’t what a non-tabloid media organization should be focused on. This section includes their “burrito competition,” which could be a fun idea, but their bracket apparently neglected sparsely populated states like New Mexico and Arizona, where the burrito historically originated.

The “economics” section has faced substantial criticism. For example, Ben Casselman’s article, “Typical minimum-wage earners aren’t poor, but they’re not quite middle class,” was criticized in Al-Jazeera America for being based on a single set of data plotting minimum-wage workers by household income. He doesn’t consider the controversial issue of how to measure poverty or the decrease in the real value of the minimum wage, and he ends up undermining the case for raising the minimum wage. Another article about corporate cash hoarding was criticized by Paul Krugman and others for jumping to conclusions based on revised data. As Malcolm Harris (an editor at The New Inquiry) writes, “Data extrapolation is a very impressive trick when performed with skill and grace…but it doesn’t come equipped with the humility we should demand from our writers.”

Their “science” section leaves a lot to be desired. For example, they have a piece assessing health news reports in which the author (Jeff Leek) uses Bayesian priors based on an “initial gut feeling” before assigning numbers to a checklist. As pointed out in this Columbia Journalism Review article, “plenty of people have already produced such checklists—only more thoughtfully and with greater detail…Not to mention that interpreting the value of an individual scientific study is difficult—a subject worthy of much more description and analysis than FiveThirtyEight provides.” And then there was the brouhaha about Roger Pielke, whose writings about the effects of climate change I criticized before, and who’s now left the organization.
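
Returning to the point about priors: to see why starting from an “initial gut feeling” matters so much, here is a hedged toy Bayes’ rule calculation (with my own illustrative numbers, nothing from the FiveThirtyEight piece). The same positive study can look convincing or unconvincing depending entirely on the prior you feed in.

```python
# Toy Bayes' rule calculation: how the assumed prior drives the conclusion.
# The sensitivity and false-positive rate below are invented for illustration.
def posterior(prior, sensitivity=0.8, false_positive_rate=0.3):
    """P(claim is true | a study reports a positive result)."""
    evidence = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / evidence

for prior in (0.05, 0.30, 0.70):
    print(f"gut-feeling prior = {prior:.2f}  ->  posterior = {posterior(prior):.2f}")
```

A checklist may well be sensible, but the prior is an assumption, and it should be stated and defended rather than left implicit.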

Maybe Nate Silver should leave these topics to the experts and stick to covering sports? He does that really well.

Thomas Piketty on Inequality

Let’s briefly consider two more examples. You’ve probably heard about the popular and best-selling analysis of data-driven economics in Thomas Piketty’s magnum opus, Capital in the Twenty-First Century. It’s a long but well-written book in which Piketty makes convincing arguments about how income and wealth inequality are worsening in the United States, France, and other developed countries. (See these reviews in the NY Review of Books and Slate.) It’s influential because of its excellent and systematic use of statistics and data analysis, because of the neglect of wealth inequality by other mainstream economists, and of course because of the economic recession and the dominance of the top 1 percent.

Piketty has been criticized by conservatives, and he has successfully responded to these critics. His proposal for a progressive tax on wealth has also been criticized by some. Perhaps the book’s popularity and the clearly widespread and underestimated economic inequality will result in more discussion and consideration of this and other proposals.

I want to make a different point though. As impressive as Piketty’s book is, we should be careful about how we interpret it and his ideas for reducing inequality. For example, as argued by Russell Jacoby, unlike Marx in Das Kapital, Piketty takes the current system of capitalism for granted. Equality “as an idea and demand also contains an element of resignation; it accepts society, but wants to balance out the goods or privileges…Equalizing pollution pollutes equally, but does not end pollution.” While Piketty’s ideas for reducing economic extremes could be very helpful, they don’t “address a redundant labor force, alienating work, or a society driven by money and profit.” You may or may not agree with Piketty’s starting point—and you do have to start somewhere—but it’s important to keep it in mind when interpreting the results.

As before, just because something is “data-driven” doesn’t mean that the data, analysis, or conclusions can’t be questioned. We always need to be grounded in data, but we need to be careful about how we interpret analyses of them.

HealthMap on Ebola

Harvard’s HealthMap gained attention for using algorithms to detect the beginning of the Ebola outbreak in Africa before the World Health Organization did. Is that a big success for “big data”? Not so, according to Foreign Policy. “It’s an inspirational story that is a common refrain in the ‘big data’ world—sophisticated computer algorithms sift through millions of data points and divine hidden patterns indicating a previously unrecognized outbreak that was then used to alert unsuspecting health authorities and government officials…The problem is that this story isn’t quite true.” By the time HealthMap monitored its very first report, the Guinean government had actually already announced the outbreak and notified the WHO. Part of the problem is that it was published in French, while most monitoring systems today emphasize English-language material.

This seems to be another case of people jumping to conclusions to fit a popular narrative.

What does all this mean for Science?

Are “big data” and “data-driven” science more than just buzzwords? Maybe. But as these examples show, we have to be careful when utilizing them and interpreting their results. When some people conduct various kinds of statistical analyses and data mining, they act as if the data speak for themselves. So their conclusions must be indisputable! But the data never speak for themselves. We scientists and analysts are not simply going around performing induction, collecting every relevant datum around us, and cranking the data through machines.

Every analysis has some assumptions. We all make assumptions about which data to collect, which way to analyze them, which models to use, how to reduce our biases, and how to assess our uncertainties. All machine learning methods, including “unsupervised” learning (in which one tries to find hidden patterns in data), require assumptions. The data definitely do not “drive” one to a particular conclusion. When we interpret someone’s analysis, we may or may not agree with their assumptions, but we should know what they are. And any analyst who does not clearly disclose their assumptions and uncertainties is doing everyone a disservice. Scientists are human and make mistakes, but these are obvious mistakes to avoid. Although objective data-driven science might not be possible, as long as we’re clear about how we choose our data and models and how we analyze them, it’s still possible to make progress and reach a consensus on some issues and ask new questions on others.
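
As a concrete (and purely illustrative) example of the point about “unsupervised” learning: before k-means clustering can find any “hidden patterns,” we have to tell it how many clusters to look for, and different choices produce different patterns. A minimal sketch, assuming scikit-learn is available:

```python
# k-means clustering of synthetic data: the "discovered" structure depends on
# the number of clusters we assume up front.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic two-dimensional data drawn from three well-separated blobs.
data = np.vstack([rng.normal(loc=center, scale=0.5, size=(100, 2))
                  for center in (0.0, 3.0, 6.0)])

for k in (2, 3, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    sizes = np.bincount(labels)
    print(f"assumed k = {k}  ->  cluster sizes = {sizes.tolist()}")
```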

Exploring the “Multiverse” and the Origin of Life

After two weeks away from the blog, I’m back! At the end of July, I attended an interesting event at UC San Diego’s Arthur C. Clarke Center for Human Imagination. (Yes, that’s what it’s called!) The event was a panel discussion entitled, “How Big is the World?: Exploring the Multiverse in Modern Astrophysics, Cosmology, and Beyond” (and you can watch the event here). The three speakers included Andrew Friedman (postdoctoral fellow in astronomy at MIT), Brian Keating (professor of physics in my department at UCSD), and David Brin (Hugo & Nebula Award Winning Author).

The Clarke Center seems to be a unique place with an ambitious program that incorporates a variety of “transdisciplinary” activities. This event fits with their nebulous theme, and the talks and discussions frequently overlapped between science, philosophy of science, and science fiction. I think science and philosophy of science go well together especially when we’re exploring the edges of scientific knowledge, including cosmological astrophysics and the origins of human life. (See my previous post and this recent article on Salon.) Too often astrophysicists, myself included, become very specialized and neglect the “big questions.” Nonetheless, I think we should be careful when we traverse the border between science and science fiction: while it’s exciting to connect them and useful for public outreach, we should mind the gap.

Andrew Friedman focused on the “multiverse”. What is a multiverse, you ask? I’m not entirely clear on it myself, but I’ll try to explain. In the first fraction of a second of the Big Bang, the universe appears to have gone through a phase of accelerated, exponential expansion (called “inflation”) driven by the vacuum energy of one or more quantum fields. The gravitational waves recently reported by BICEP2 (an experiment in which Brian Keating was involved) appear to support particular inflationary models in which, once inflation starts, the process happens repeatedly and in multiple ways. In other words, there may be not one but many universes, including parallel universes—a popular topic in science fiction.

(Magazine illustration of the multiverse.)

Inflationary theory solves some problems involving the initial conditions of the Big Bang cosmology, but I’m not so sure that we have—or can ever have—evidence clearly pointing to the existence of multiverses. In addition, in my opinion, Friedman stretched the concept of “universe” to try to argue for the multiverse. He spoke about the fact that there are parts of the universe that are completely inaccessible even if we could travel at the speed of light, but that doesn’t mean that the inaccessible regions are another universe. It’s fun to think about a “quantum divergence of worlds,” as David Brin referred to it, but quantum mechanics (with the standard Copenhagen interpretation; see this book by Notre Dame professor Jim Cushing) doesn’t imply a multiverse either: Schrödinger’s live cat and dead cat are not in separate universes. As far as I know, I’m not creating new universes every time I barely miss or catch the train.

The speakers did bring up some interesting questions, though, about the “anthropic principle” and “fine tuning.” The anthropic principle is a contentious topic that has attracted wide interest and criticism, and if you’re interested, read this review of the literature by Pittsburgh professor John Earman. The anthropic principle is the idea that the physical universe we observe must be compatible with conscious life. It’s a cosmic coincidence that the densities of vacuum energy and matter are nearly equal and that the universe’s expansion rate is nearly equal to the critical rate separating eternal expansion from recontraction; if the universe were significantly different, it would be impossible to develop conscious life such as humans who can contemplate their own universe. (In the context of the multiverse, there may be numerous universes, but only a tiny fraction of them could support life.) It’s important to study the various coincidences and (im)probabilities in physics and cosmology in our universe, but it’s not clear what these considerations explain.
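
For reference, the “critical rate” mentioned above corresponds to the standard critical density of a Friedmann universe; writing it out makes clear what the coincidence is about (a textbook result, not anything specific to the panel):

```latex
% Critical density separating eternal expansion from recontraction,
% for a Friedmann universe with Hubble constant H_0:
\[
  \rho_{\mathrm{c}} = \frac{3 H_0^{2}}{8 \pi G},
  \qquad
  \Omega \equiv \frac{\rho}{\rho_{\mathrm{c}}} \approx 1 \quad \text{(as observed)} .
\]
```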

David Brin spoke differently from the others, since he’s more a writer than a scientist, and his part of the discussion was consistently engaging. He frequently made connections to fiction (such as a legitimate criticism of Walt Whitman’s “Learn’d Astronomer”) and he had a poetic way of speaking; when talking about the possibility of life beyond Earth, he said, “If there are living creatures on Titan, they will be made of wax.” He also brought up the “Drake equation,” which is relevant in the context of the topics above. The Drake equation is a probabilistic expression for estimating the number of active, communicating civilizations in our galaxy. It involves a multiplication of many highly uncertain quantities (see this xkcd comic), but it’s nonetheless interesting to think about; a toy version of the calculation follows below. The problem is that space is really big—“vastly, hugely, mind-bogglingly big,” according to Douglas Adams—so even if there are Vulcans or Klingons or dozens or millions of other civilizations out there, it would take a really, really, really long time to find them and attempt to communicate with them. We could send people from Earth on a long shuttle ride to visit another civilization, but there’s no guarantee that humanity will still be around when they try to call back. It’s unfortunate, but this is the universe we live in.
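
Here is that toy version of the Drake multiplication (in Python, with input values that are nothing more than my own guesses, chosen to show the mechanics rather than to make a prediction):

```python
# The Drake equation as a simple product of highly uncertain factors.
# All input values below are illustrative guesses, not serious estimates.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L, the number of communicating civilizations."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=1.5,   # star formation rate in the galaxy (stars per year)
          f_p=0.9,      # fraction of stars with planets
          n_e=0.5,      # potentially habitable planets per such star
          f_l=0.1,      # fraction of those on which life arises
          f_i=0.01,     # fraction of those developing intelligence
          f_c=0.1,      # fraction of those that communicate detectably
          L=10_000)     # years such a civilization keeps transmitting
print(f"N = {N:.2f} communicating civilizations, given these assumed inputs")
```

Shift a couple of those factors by an order of magnitude and N swings from a crowded galaxy to an effectively empty one, which is the real lesson of the equation.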

How scientists reach a consensus

Following my previous post on paradigm shifts and on how “normal science” occurs, I’d like to continue that discussion by looking at scientific consensus. To put this in context, I’m partly motivated by the recent controversy about Roger Pielke Jr., a professor of environmental studies at the University of Colorado Boulder, who is also currently a science writer for Nate Silver’s FiveThirtyEight website. (The controversy has been covered on Slate, Salon, and Huffington Post.) Silver’s work has been lauded for its data-driven analysis, but Pielke has been accused of misrepresenting data, selectively choosing data, and presenting misleading conclusions about climate change, for example about its effect on disaster occurrences and on the western drought.

This is also troubling in light of a recent article I read by Aklin & Urpelainen (2014), titled “Perceptions of scientific dissent undermine public support for environmental policy.” Based on an analysis of a survey of 1,000 broadly selected Americans aged 18-65, they argue that “even small skeptical minorities can have large effects on the American public’s beliefs and preferences regarding environmental regulation.” (Incidentally, a book by Pielke is among their references.) If this is right, then we are left with the question of how to achieve consensus and inform public policy related to important environmental problems. As the authors note, it is not difficult for groups opposed to environmental regulation to confuse the public about the state of the scientific debate. Since it is difficult to win the debate in the media, a more promising strategy would be to increase awareness about the inherent uncertainties in scientific research so that the public does not expect unrealistically high degrees of consensus. (And that’s obviously what I’m trying to do here.)

Already a decade ago, the historian of science Naomi Oreskes (formerly a professor at UC San Diego) analyzed, in a Science article, nearly 1,000 abstracts about climate change published over the previous decade and found that none explicitly disagreed with the notion of anthropogenic global warming; in other words, a consensus appeared to have been reached. Not surprisingly, Pielke criticized this article a few months later. In her rebuttal, Oreskes made the point that, “Proxy debates about scientific uncertainty are a distraction from the real issue, which is how best to respond to the range of likely outcomes of global warming and how to maximize our ability to learn about the world we live in so as to be able to respond efficaciously. Denying science advances neither of those goals.”

The short answer to the question, “How do scientists reach a consensus?” is “They don’t.” Once a scientific field has moved beyond a period of transition, the overwhelming majority of scientists adopt at least the central tenets of a paradigm. But even then, there likely will be a few holdouts. The holdouts rarely turn out to be right, but their presence is useful because a healthy and democratic debate about the facts and their interpretation clarifies which aspects of the dominant paradigm are in need of further investigation. The stakes are higher, however, when scientific debate involves contentious issues related to public policy. In those situations, once a scientific consensus appears to be reached and once scientists are sufficiently certain about a particular issue, we want to be able to respond effectively in the short or long term with local, national, or international policies or regulations or moratoria, depending on what is called for. In the meantime, the debates can continue and the policies can be updated and improved.

Of course, it is not always straightforward to determine when a scientific consensus has been reached or when the scientific community is sufficiently certain about an issue. A relevant article here is Shwed & Bearman (2010), “The Temporal Structure of Scientific Consensus Formation.” They refer to “black boxing,” in which scientific consensus allows scientists to state something like “smoking causes cancer” without having to defend it, because it has become accepted by the consensus based on a body of research. Based on an analysis of citation networks, they show that areas considered by expert studies to have little rivalry have “flat” levels of modularity, while more controversial ones show much more modularity. “If consensus was obtained with fragile evidence, it will likely dissolve with growing interest, which is what happened at the onset of gravitational waves research.” But consensus about climate change was reached in the 1990s. Climate change skeptics (a label which may or may not apply to Pielke) and deniers can cultivate doubt in the short run, but they’ll likely find themselves ignored in the long run.
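
For a flavor of what that citation-network analysis looks like in practice, here is a minimal sketch (using a synthetic graph and the networkx library, not Shwed & Bearman’s actual data or methods): high modularity means the literature splits into rival camps, while “flat” modularity suggests consensus.

```python
# Toy illustration of citation-network modularity as a proxy for consensus.
# The graph is synthetic; real analyses use actual citation databases.
import networkx as nx
from networkx.algorithms import community

# Two densely connected "camps" with only a few links between them.
G = nx.planted_partition_graph(l=2, k=15, p_in=0.5, p_out=0.02, seed=0)

camps = community.greedy_modularity_communities(G)
Q = community.modularity(G, camps)
print(f"detected camps: {len(camps)}, modularity Q = {Q:.2f}")
# A high Q suggests persistent rival camps (controversy); a low, "flat" Q
# suggests a well-mixed literature, i.e., something closer to consensus.
```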

Finally, I want to make a more general point. I often talk about how science is messy and nonlinear, and that scientists are human beings with their own interests and who sometimes make mistakes. As stated by Steven Shapin (also formerly a professor at UC San Diego) in The Scientific Revolution, any account “that seeks to portray science as the contingent, diverse, and at times deeply problematic product of interested, morally concerned, historically situated people is likely to be read as criticism of science…Something is being criticized here: it is not science but some pervasive stories we tend to be told about science” (italics in original). Sometimes scientific debates aren’t 100% about logic and data and it’s never really possible to be 0% biased. But the scientific method is the most reliable and respected system we’ve got. (A few random people might disagree with that, but I think they’re wrong.)

Paradigm Shifts?

In addition to physics and astronomy, I used to study philosophy of science and sociology. In my opinion, many scientists could learn a few things from sociologists and philosophers of science, to help them better understand how scientific processes work, what influences them and potentially biases scientific results, and how science advances through their and others’ work. In addition, I think that people who aren’t professional scientists (whom we often simply call “the public”) could benefit from a better understanding of what we are learning and gaining from science and of how scientific results are obtained. I’ll just write a few ideas here and we can discuss these issues further later, but my main point is this: science is an excellent tool that sometimes produces important results and helps us learn about the universe, our planet, and ourselves, but it can be a messy and nonlinear process, and scientists are human–they sometimes make mistakes and may be stubborn about abandoning a falsified theory or interpretation. The cleanly and clearly described scientific results in textbooks and newspaper articles are misleading in a way, as they sometimes make us forget the long, arduous, and contentious process through which those results were achieved. To quote from Carl Sagan (in Cosmos), who inspired the subtitle of this blog (the “pale blue dot” reference),

[Science] is not perfect. It can be misused. It is only a tool. But it is by far the best tool we have, self-correcting, ongoing, applicable to everything. It has two rules. First: there are no sacred truths; all assumptions must be critically examined; arguments from authority are worthless. Second: whatever is inconsistent with the facts must be discarded or revised.

As you may know, the title of this post refers to Thomas Kuhn (in his book, The Structure of Scientific Revolutions). “Normal science” (the way science is usually done) proceeds gradually and is based on paradigms, which are collections of diverse elements that tell scientists what experiments to perform, which observations to make, how to modify their theories, how to make choices between competing theories and hypotheses, etc. We need a paradigm to demarcate what is science and to distinguish it from pseudo-science. Scientific revolutions are paradigm shifts, which are relatively sudden and unstructured events, and which often occur because of a crisis brought about by the accumulation of anomalies under the prevailing paradigm. Moreover, they usually cannot be decided by rational debate; paradigm acceptance via revolution is essentially a sociological phenomenon and is a matter of persuasion and conversion (according to Kuhn). In any case, it’s true that some scientific debates, especially involving rival paradigms, are less than civil and rational and can look something like this:
(Comic of Calvin arguing.)

I’d like to make the point that, at conferences and in grant proposals, scientists (including me) pretend that we are developing research that is not only cutting edge but also groundbreaking and Earth-shattering; some go so far as to claim that they are producing revolutionary (or paradigm-shifting) research. Nonetheless, scientific revolutions are actually extremely rare. Science usually advances at a very gradual pace and with many ups and downs. (There are other reasons to act like our science is revolutionary, however, since this helps to gain media attention and perform outreach to the public, and it helps policy-makers to justify investments in basic research in science.) When a scientist or group of scientists does obtain a critically important result, it is usually the case that others have already produced similar results, though perhaps with less precision. Credit often goes to a single person who packaged and advertised their results well. For example, many scientists are behind the “Higgs boson” discovery, and though American scientists received the Nobel Prize for detecting anisotropies in the cosmic microwave background with the COBE satellite, Soviet scientists actually made an earlier detection with the RELIKT-1 experiment.

(Photo of Einstein and Bohr.)

Let’s briefly focus on the example of quantum mechanics, in which there were intense debates in the 1920s about (what appeared to be) “observationally equivalent” interpretations, which in a nutshell were either probabilistic or deterministic and realist. My favorite professor at Notre Dame, James T. Cushing, wrote a provocative book on the subject with the subtitle, “Historical Contingency and the Copenhagen Hegemony“. The debates occurred between Niels Bohr’s camp (with Heisenberg, Pauli, and others, who were primarily based in Copenhagen and Göttingen) and Albert Einstein’s camp (with Schrödinger and de Broglie). Bohr’s younger followers were trying to make bold claims about QM and to make names for themselves, and one could argue that they misconstrued Einstein’s views. Einstein had essentially lost by the 1930s, when the nail in the coffin was von Neumann’s so-called impossibility proof of “hidden variables” theories–a proof that was shown to be flawed thirty years later. In any case, Cushing argues that in decisions about accepting or dismissing scientific theories, sometimes social conditions or historical coincidences can play a role. Mara Beller also wrote an interesting book about this (Quantum Dialogue: The Making of a Revolution), and she finds that in order to understand the consolidation of the Copenhagen interpretation, we need to account for the dynamics of the Bohr et al. vs. Einstein et al. struggle. (In addition to Cushing and Beller, another book by Arthur Fine, called The Shaky Game, is also a useful reference.) I should also point out that Bohr used the rhetoric of “inevitability”, which implied that there was no plausible alternative to the Copenhagen paradigm. If you can convince people that your view is already being adopted by the establishment, then the battle has already been won.

More recently, we have had other scientific debates about rival paradigms, such as, in astrophysics, the existence of dark matter (DM) versus modified Newtonian dynamics (MOND); DM is more widely accepted, though its nature–whether it is “cold” or “warm” and to what extent it is self-interacting–is still up for debate. Debates in biology, medicine, and economics are often even more contentious, partly because they have policy implications and can conflict with religious views.

Other relevant issues include the “theory-ladenness of observation”, the argument that everything one observes is interpreted through a prior understanding (and assumption) of other theories and concepts, and the “underdetermination of theory by data.” The concept of underdetermination dates back to Pierre Duhem and W. V. Quine, and it refers to the argument that, given a body of evidence, more than one theory may be consistent with it. A corollary is that when a theory is confronted with recalcitrant evidence, the theory is not falsified; instead, it can be reconciled with the evidence by making suitable adjustments to its hypotheses and assumptions. It is nonetheless the case that some theories are clearly better than others. According to Larry Laudan, we should not overemphasize the role of sociological factors over logic and the scientific method.

In any case, all of this has practical implications for scientists as well as for science journalists and for people who popularize science. We should be careful to be aware of, examine, and test our implicit assumptions; we should examine and quantify all of our systematic uncertainties; and we should allow for plenty of investigation of alternative explanations and theories. In observations, we also should be careful about selection effects, incompleteness, and biases. Finally, we should remember that scientists are human and sometimes make mistakes. Scientists are trying to explore and gain knowledge about what’s really happening in the universe, but sometimes other interests (funding, employment, reputation, personalities, conflicts of interest, etc.) play important roles. We must watch out for herding effects and confirmation bias, where we converge and end up agreeing on the incorrect answer. (Historical examples include the optical or electromagnetic ether; the crystalline spheres of medieval astronomy; the humoral theory of medicine; ‘catastrophist’ geology; etc.) Paradigm shifts are rare, but when we do make such a shift, let’s be sure that what we’re transitioning to is actually our currently best paradigm.

[For more on philosophy of science, this anthology is a useful reference, and in particular, I recommend reading work by Imre Lakatos, Paul Feyerabend, Helen Longino, Nancy Cartwright, Bas van Fraassen, Mary Hesse, and David Bloor, who I didn’t have the space to write about here. In addition, others (Ian Hacking, Allan Franklin, Andrew Pickering, Peter Galison) have written about these issues in scientific observations and experimentation. For more on the sociology of science, this webpage seems to contain useful references.]