One Man’s Perspective on Diversity and Inequality in Science

[This blog was originally posted at the Women in Astronomy blog. Thanks to Jessica Kirkpatrick for editing assistance.]

It’s obvious, but one thing I’ve noticed over my career so far is that many departments, institutions, conferences, organizations, committees, high-profile publications, big research grants, etc., both nationally and internationally, and especially leadership positions, are filled with straight white men. There are notable and impressive exceptions, but the trend is clear. The distribution of people in the scientific workforce clearly doesn’t reflect the distribution in the overall population. For example, according to the AAS’s Committee on the Status of Women in Astronomy, nearly half of undergraduate students who obtain bachelor of science degrees are women, but only a third of astronomy graduate students and 30% of Ph.D. recipients are. Women make up 25-30% of postdocs and junior faculty, and this drops by half (to 15%) among tenured faculty. This is not explained by historical gender imbalances: if women were promoted and retained at rates comparable to men, the fraction of women would remain roughly constant from one career stage to the next rather than shrinking. The demographics in terms of race aren’t good either: according to the American Institute of Physics, African Americans and Hispanics combined account for only 5% of physics faculty.

Of course, this isn’t news to readers of this blog. And the disturbing lack of diversity doesn’t just affect us in astronomy and astrophysics, or even just in the physical sciences. For example, as you’ve probably seen, the lack of diversity in Silicon Valley has deservedly been in the news lately. Tech companies like Google, Yahoo, Facebook, LinkedIn, and Twitter have all been criticized for being dominated by white men (and recently, also Asian men). We definitely need to work harder at improving diversity in all STEM fields.

I’m a half-white, half-Iranian man in astronomy and astrophysics. Everything is competitive these days, but in my opinion, if I’m applying for a job or a grant, for example, and a black or Latino person or a woman with the same experience and qualifications as mine has also applied, then they should probably get it. While the grant and job markets in STEM fields are very competitive, I think we should look at the big picture, in which we need to strive for more equality in our universities and institutions. It’s also important to keep in mind that both men and women leave academia (though at different rates) and find important and fulfilling careers elsewhere.

I’d also like to point out that, in Iran, STEM fields are not seen as “male” subjects as much as they are in the US, and the country has therefore come close to gender parity in these fields. For example, 70% of Iranian science and engineering students are women, and when I last visited Tehran, I met many brilliant female Iranian physics students who could speak about science in both Farsi and English. In recent news, Maryam Mirzakhani became the first woman to win the Fields Medal in mathematics, and Azeen Ghorayshi won the Clark/Payne award for science journalists. (She recently wrote a story for Newsweek about smog in Tehran, which keeps getting worse.)

In any case, we all benefit—and science benefits—when we have a diverse community. A more diverse workforce, including among leadership positions, helps to produce new ideas and perspectives and to guard against bias. In business, when there is more diversity, everyone profits.

Speaking of bias, how can we deal with “unconscious bias”? In practice, two applicants are never identical, so we have to assess people on a case-by-case basis. It turns out that, perhaps surprisingly, men and women hold similar biases against women in STEM fields, though these biases can be reduced when people are more aware of them and when diverse committees make hiring and leadership decisions. In addition, diversity and racial equity should be considered at both the initial and shortlist stages of admissions and hiring, and the academic tenure system should be more flexible.

On the issue of leadership and mentorship, I’ve had female and male bosses and mentors and I’ve advised male and female students; in addition, I’ve worked with and for people from many different countries and backgrounds. As far as I can tell, it hasn’t made a difference for me, and I’ve seen a greater variety of management and collaboration styles among men and women than differences between them (though I’ve seen more overconfident men than women). Unfortunately though, some people still do view female leaders differently and hold them to different standards than male leaders.

So what can we do? Many people who are trying to improve this situation are doing excellent work on public outreach and educational programs, especially with the purpose of reaching and encouraging girls, minorities, and underprivileged youth. It’s not hard to find such programs: for example, I recently participated in the Adler Planetarium Astrojournalists program and in a physics outreach program at UCSD with Intertribal Youth organized by Adam Burgasser. The success of such programs, as well as the growing number of role models, helps to gradually change cultural stereotypes and reduce biases. For those of us at research institutions, work on outreach and communication should be valued as much as research achievements when hiring and tenure decisions are made. One way to do this would be to state this explicitly in job advertisements and to make it clear to hiring and tenure committees at our own institutions. It would require more work from such committees, but that’s a small price to pay.

The literature on “confidence gap” issues has been growing rapidly, encouraging women to “lean in” and be more confident and self-assured at work. This is important, but it amounts to encouraging women to behave more like men. We can’t neglect persistent structural and institutionalized barriers, and we can’t forget that gender and race inequality is everywhere or that men and white people benefit from their privileges. (And although class is a separate issue, it’s worth pointing out that there is also a lack of class diversity within higher education and graduate programs. According to the US Census Bureau, the income gap between the working class and the professional class with higher academic degrees is growing, so this problem is getting worse.)

Increasing gender and race diversity is an important goal and an ongoing struggle, but it’s also a means to an end. I believe that we need to set our sights higher than merely having a few more white women and people of color among faculty members. We need *paid* maternity leave, universal or affordable child-care options, better dual-career policies, and paternity leave that is expected rather than exceptional. We shouldn’t praise men who continue working full-time rather than spending time with their new children, and we still have a long way to go until men share housework equally with women. Work-life balance issues affect everyone, including single people and those without children. We can advocate for new policies at our own institutions, and city- or state-wide or even national policies would make a huge difference. As a potential step in that direction, even Congress is aware of these issues: in the reauthorization of the America COMPETES Act (which funds STEM R&D, education, and innovation), it now directs the Office of Science & Technology Policy (OSTP) to develop and implement guidelines for policies that encourage work-life balance, workplace flexibility, and family responsiveness. In any case, we’re gradually making progress, but much more work remains to be done.

Is “Data-driven Science” an Oxymoron?

In recent years, we’ve been repeatedly told that we’re living and working in an era of Big Data (and Big Science). We’ve heard how Nate Silver and others are revolutionizing how we analyze and interpret data. In many areas of science and in many aspects of life, for that matter, we’re obtaining collections of datasets so large and complex that it becomes necessary to change our traditional analysis methods. Since the volume, velocity, and variety of data are rapidly increasing, it is increasingly important to develop and apply appropriate techniques and statistical tools.

However, is it true that Big Data changes everything? Much can be gained from proper data analysis and from “data-driven science.” For example, the popular story about Billy Beane and Moneyball shows how Big Data and statistics transformed how baseball teams are assessed. But I’d like to point out some misconceptions and dangers of the concept of data-driven science.

Governments, corporations, and employers are already collecting (too?) much of our precious, precious data and expending massive effort to study it. We might worry about this because of privacy concerns, but we should also worry about what can happen when analyses are excessively focused on the data. There are questions that we should be asking more often: Who’s collecting the data? Which data and why? Which analysis tools and why? What are their assumptions and priors? My main point will be that the results from computer codes churning through massive datasets are not objective or impartial, and the data don’t inevitably drive us to a particular conclusion. This is why the concept of “data-driven” anything is misleading.

Let’s take a look at a few examples of data-driven analysis that have been in the news lately…

Nate Silver and FiveThirtyEight

Many media organizations are about something, and they use a variety of methods to study it. In a sense, FiveThirtyEight isn’t really about something. (If I wanted to read about nothing, I’d check out ClickHole and be more entertained.) Instead, FiveThirtyEight is about their method, which they call “data journalism” and by which they mean “statistical analysis, but also data visualization, computer programming and data-literate reporting.”

I’m exaggerating though. They cover broad topics related to politics, economics, science, life, and sports. They’ve had considerable success in making probabilistic predictions about baseball, March Madness, and World Cup teams and in packaging statistics in a slick and easy-to-understand way. They also successfully predicted the 2012 US elections on a state-by-state basis, though they stuck to the usual script of treating it as a horse race: one team against another. Their statistical methods are sometimes “black boxes”, but if you look, they’ll often provide additional information about them. Their statistics are usually sound, but maybe they should be more forthcoming about the assumptions and uncertainties involved.

Their “life” section basically allows them to cover whatever they think is the popular meme of the day, which in my opinion isn’t what a non-tabloid media organization should be focused on doing. This section includes their “burrito competition,” which could be a fun idea, but their bracket apparently neglected sparsely populated states like New Mexico and Arizona, where the burrito historically originated.

The “economics” section has faced substantial criticism. For example, Ben Casselman’s article, “Typical minimum-wage earners aren’t poor, but they’re not quite middle class,” was criticized in Al-Jazeera America for being based on a single set of data plotting minimum-wage workers by household income. He doesn’t consider the controversial issue of how to measure poverty or the decrease in the real value of the minimum wage, and he ends up undermining the case for raising the minimum wage. Another article about corporate cash hoarding was criticized by Paul Krugman and others for jumping to conclusions based on revised data. As Malcolm Harris (an editor at The New Inquiry) writes, “Data extrapolation is a very impressive trick when performed with skill and grace…but it doesn’t come equipped with the humility we should demand from our writers.”

Their “science” section leaves a lot to be desired. For example, they have a piece assessing health news reports in which the author (Jeff Leek) uses Bayesian priors based on an “initial gut feeling” before assigning numbers to a checklist. As pointed out in this Columbia Journalism Review article, “plenty of people have already produced such checklists—only more thoughtfully and with greater detail…Not to mention that interpreting the value of an individual scientific study is difficult—a subject worthy of much more description and analysis than FiveThirtyEight provides.” And then there was the brouhaha about Roger Pielke, whose writings about the effects of climate change I criticized before, and who’s now left the organization.

Maybe Nate Silver should leave these topics to the experts and stick to covering sports? He does that really well.

Thomas Piketty on Inequality

Let’s briefly consider two more examples. You’ve probably heard about Thomas Piketty’s popular, best-selling work of data-driven economics, his magnum opus Capital in the Twenty-First Century. It’s a long but well-written book in which Piketty makes convincing arguments about how income and wealth inequality are worsening in the United States, France, and other developed countries. (See these reviews in the NY Review of Books and Slate.) It’s influential because of its excellent and systematic use of statistics and data analysis, because of the neglect of wealth inequality by other mainstream economists, and of course because of the economic recession and the dominance of the top 1 percent.

Piketty has been criticized by conservatives, and he has successfully responded to these critics. His proposal for a progressive tax on wealth has also been criticized by some. Perhaps the book’s popularity and the clearly widespread and underestimated economic inequality will result in more discussion and consideration of this and other proposals.

I want to make a different point though. As impressive as Piketty’s book is, we should be careful about how we interpret it and his ideas for reducing inequality. For example, as argued by Russell Jacoby, unlike Marx in Das Kapital, Piketty takes the current system of capitalism for granted. Equality “as an idea and demand also contains an element of resignation; it accepts society, but wants to balance out the goods or privileges…Equalizing pollution pollutes equally, but does not end pollution.” While Piketty’s ideas for reducing economic extremes could be very helpful, they don’t “address a redundant labor force, alienating work, or a society driven by money and profit.” You may or may not agree with Piketty’s starting point—and you do have to start somewhere—but it’s important to keep it in mind when interpreting the results.

As before, just because something is “data-driven” doesn’t mean that the data, analysis, or conclusions can’t be questioned. We always need to be grounded in data, but we need to be careful about how we interpret analyses of them.

HealthMap on Ebola

Harvard’s HealthMap gained attention for using algorithms to detect the beginning of the Ebola outbreak in Africa before the World Health Organization did. Is that a big success for “big data”? Not so, according to Foreign Policy. “It’s an inspirational story that is a common refrain in the ‘big data’ world—sophisticated computer algorithms sift through millions of data points and divine hidden patterns indicating a previously unrecognized outbreak that was then used to alert unsuspecting health authorities and government officials…The problem is that this story isn’t quite true.” By the time HealthMap monitored its very first report, the Guinean government had actually already announced the outbreak and notified the WHO. Part of the problem is that the announcement was published in French, while most monitoring systems today emphasize English-language material.

This seems to be another case of people jumping to conclusions to fit a popular narrative.

What does all this mean for Science?

Are “big data” and “data-driven” science more than just buzzwords? Maybe. But as these examples show, we have to be careful when utilizing them and interpreting their results. When some people conduct various kinds of statistical analyses and data mining, they act as if the data speak for themselves. So their conclusions must be indisputable! But the data never speak for themselves. We scientists and analysts are not simply going around performing induction, collecting every relevant datum around us, and cranking the data through machines.

Every analysis has some assumptions. We all make assumptions about which data to collect, which way to analyze them, which models to use, how to reduce our biases, and how to assess our uncertainties. All machine learning methods, including “unsupervised” learning (in which one tries to find hidden patterns in data), require assumptions. The data definitely do not “drive” one to a particular conclusion. When we interpret someone’s analysis, we may or may not agree with their assumptions, but we should know what they are. And any analyst who does not clearly disclose their assumptions and uncertainties is doing everyone a disservice. Scientists are human and make mistakes, but these are obvious mistakes to avoid. Although objective data-driven science might not be possible, as long as we’re clear about how we choose our data and models and how we analyze them, then it’s still possible to make progress and reach a consensus on some issues and ask new questions on others.
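
To make this concrete, here is a minimal sketch in Python (my own toy example with made-up data, not taken from any of the analyses above) of how even an “unsupervised” method like k-means clustering depends on assumptions, in this case the number of clusters we tell it to look for.

```python
# A minimal sketch (my own toy example, not from any analysis discussed above)
# showing that even "unsupervised" learning encodes assumptions: k-means
# requires choosing the number of clusters (and implicitly a Euclidean
# distance metric), and different choices yield different "discovered" patterns.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic data: three overlapping 2D blobs (stand-ins for some measurements)
data = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(3.0, 0.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(1.5, 2.5), scale=0.5, size=(100, 2)),
])

for k in (2, 3, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    sizes = np.bincount(labels).tolist()
    print(f"assumed k={k}: cluster sizes = {sizes}")
# The groups we "find" depend on the assumed k; the data alone don't tell us
# which description is the right one.
```

Run with different values of k, the same data “reveal” two, three, or five groups; which answer gets reported is a modeling choice, not something the data decide for us.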

Rosetta and the Comet

The title sounds like I’ll tell you a fable or short story or something. This is neither of those things, but it is quite a story! I’m not personally involved in the Rosetta mission, but I’ll do my best to tell you about it and about what makes it unique and exciting. (For you fellow astrophysicists reading this, if I’ve missed or misstated anything, please let me know.) And if you’d like more information and updates, I recommend looking at Emily Lakdawalla’s blog posts on the ESA mission and Phil Plait’s blog on Slate. If you’re interested in the history and importance of comets (and in how “we’re made of starstuff”), check out Carl Sagan and Ann Druyan’s book, Comet.

Rosetta, the €1.3 billion flagship space probe (see below) of the European Space Agency (NASA’s European counterpart), has chosen to accept an ambitious mission: to chase down, intercept, and orbit a distant comet, and then send the lander Philae to “harpoon” itself to the surface and carry out a detailed analysis. Rosetta is, of course, named after the Rosetta Stone of ancient Egypt, and Philae is named after an island in the Nile. Rosetta and Philae are hip spacecraft: they even have their own Twitter accounts—@ESA_Rosetta and @Philae2014, respectively. They should be careful when examining the comet below its surface, because if it’s anything like Star Trek, they could find an ancient alien archive in the center! (Fans of the “Masks” episode will know what I’m talking about.)

[Image credit: Science Magazine]

Comets are literally pretty cool. They’re clumps of ice, dust, and organic materials, with tails, hurtling through space. What is this comet that Rosetta’s pursuing? It’s known as Comet 67P/Churyumov-Gerasimenko, named after a pair of Ukrainian astronomers who discovered it in 1969. 67P/C-G looks like a mere blob from a distance, but it’s 4 km in diameter and lopsided, with two barely attached lobes that make it look like a rubber duck from certain angles. “It may be an object we call a contact binary which was created when two smaller comets merged after a low-velocity collision,” said mission scientist Matt Taylor, or it may have once been a spherical object that lost much of its volatile material after encounters with the sun. It also has plumes of dust and gas (from sublimated ices) erupting from the surface, which has a temperature of about -70 C. (The montage of images below is courtesy of ESA/Rosetta/NAVCAM/Emily Lakdawalla.)

[Image: montage of Rosetta NavCam images of 67P/C-G, 6 August 2014]

[Image: Rosetta OSIRIS image of 67P/C-G, 7 August 2014]

Comets tell us about our past, since they’re thought to have formed in the cold of the outer solar system 4.6 billion years ago. They also yield information about the formation of the solar system and about the role of comets in delivering water and organic material to Earth in its history—possibly influencing the origin of life here. Cometary impacts are known to have been much more common in the early solar system than today. There may be billions of these dirty snowballs (or icy dustballs) orbiting the sun, and thousands of them have been observed. Prior to Rosetta, comets had been analyzed up close by space probes, including Halley’s comet by ESA’s Giotto in 1986, Comet Wild 2 by NASA’s Stardust in 2004, and Comet Tempel 1 by NASA’s Deep Impact, which fired an impactor into it in 2005. The diagram below (courtesy of ESA/Science journal) shows the orbits of Rosetta and 67P/C-G. The comet has been traveling at speeds up to 135,000 km/hr, and Rosetta had to use flybys of the Earth and Mars to maneuver onto the same orbital path. Rosetta will be the first mission ever to orbit and land on a comet, so this is really a historic moment in space exploration.

[Diagram: orbits of Rosetta and 67P/C-G, courtesy of ESA/Science]

On 11 November, Rosetta will be in a position to eject the Philae lander from only a couple of kilometers away. Philae is a 100 kg, box-shaped lander with three legs and numerous instruments for experiments (see below), and it was provided by the German Aerospace Center (DLR). NASA scientists spoke of the “7 minutes of terror” as the Curiosity rover descended to Mars, but Philae’s descent will take hours. Note that 67P is so small and its gravity so weak that the lander would likely bounce off, which is why it needs harpoons as well as screws on its legs to bolt it to the surface. If the landing is successful—let’s cross our fingers that it is—it will perform many interesting experiments with its instruments. For example, CONSERT will use radio waves to construct a 3D model of the nucleus, Ptolemy will measure the abundance of water and heavy water, and COSAC will look for long-chain organic molecules and amino acids. COSAC will also detect the chirality of the molecules and maybe determine whether amino acids are always left-handed like the ones on Earth. (“Chirality” means “handedness”. I think the only other time I heard the term was in studies of the handedness of spiral galaxies.)

[Image: Philae’s instruments]

Let’s hope for Rosetta’s and Philae’s success! I’ll update you on this blog when I hear more information.

A few thoughts on the peer-review process

How does the peer-review process work? How do scientists critically review each other’s work to ensure that the most robust results and thorough analyses are published and that only the best research proposals are awarded grants? How do scientists’ papers and articles change between submission and publication? It’s a process that has advantages and shortcomings, and maybe it’s time for us as a community to try to improve it. (I’m not the only person who’s written about this stuff, and you may be interested in other scientists’ perspectives, such as those of Sarah Kendrew, Andrew Pontzen, and Kelle Cruz. This blog at the Guardian has interesting related posts too.)

For us scientists, writing about our research and writing proposals for planned research are critically important aspects of the job. The ubiquity of the “publish or perish” maxim highlights their significance for advancing one’s career. Publishing research and evaluating and responding to others’ publications are crucial for scientists to debate and eventually reach a consensus on particular issues. We want to make sure that we are learning something new and converging on important ideas and questions rather than being led astray by poorly vetted results. Therefore, we want to make the peer-review process as effective and efficient as possible.


For readers unfamiliar with the process, it basically goes like this: scientist Dr. A and her colleagues are working on a research project. They obtain a preliminary result—which may be a detection of something, the development of a new model, the refutation of a previously held assumption, etc.—which they test and hone until they have something they deem publishable. Then Dr. A’s group writes a paper explaining the research they conducted (so that it could potentially be repeated by an independent group) and laying out their arguments and conclusions while putting them in the context of other scientists’ work. If they can put together a sufficiently high-quality paper, they then submit it to a journal. An independent “referee” then reviews the work and writes a report. (Science is an international enterprise, so like the World Cup, referees can come from around the world.) The paper goes through a few or many iterations between the authors and referee(s) until it is either rejected or accepted for publication, and these interactions may be facilitated by an editor. At that point, the paper is typeset, the authors and copy editors check that the proof is accurate, and a couple of months later the paper is published online and in print.

(In my fields, people commonly publish their work in the Astrophysical Journal, Monthly Notices of the Royal Astronomical Society, Astronomy & Astrophysics, Physical Review D, and many others, including Nature, where they publish the most controversial and provocative results, which sometimes turn out later to be wrong.)

In general, this system works rather well, but there are inherent problems with it. For example, authors are dependent on the whim of a single referee, and some referees do not spend enough time and effort when reviewing papers and writing reports for authors. On the other hand, sometimes authors do not write sufficiently clearly or do not sufficiently double-check all of their work before submitting a paper. Also, great papers can sometimes be delayed for long periods because of nitpicking or unpunctual referees, while other papers may appear as though they were not subjected to much critical scrutiny, though these things are often subjective and depend on one’s perspective.

There are other questions that are worth discussing and considering. For example, how should a scientific editor select an appropriate referee to review a particular paper? When should a referee choose to remain anonymous or not? How should authors, referees, and editors deal with language barriers? What criteria should we use for accepting or rejecting a paper, and in a dispute, when and in what way should an editor intervene?

Some authors post their papers online for the community on arXiv.org (the astronomy page is here) before publication, while others wait until a paper is in press. It’s important to get results out to the community, especially for junior scientists early in their careers. The early online posting of papers can yield interesting discussions and helpful feedback, which can improve the quality of a paper before it is published. On the other hand, some of these discussions can be premature; some papers evolve significantly, and quantitative and qualitative conclusions can change, while a paper is being revised in the referee process. It is easy to jump to conclusions or to waste time with a paper that still needs further revision and analysis or that may even be fundamentally flawed. Of course, this can also be said about some published papers as well.

Implications for science journalists

These issues are also important to consider when scientists and journalists communicate and when journalists write about or present scientific achievements or discoveries. Everyone is pressed for time, and journalists are under pressure to write copy within strict deadlines, but it’s very important to critically review the relevant science whenever possible. Also, in my opinion, it’s a good idea for journalists to talk to a scientist’s colleagues and competitors to try to learn about multiple perspectives and to determine which issues might be contentious. We should also keep in mind that achievements and discoveries are rarely accomplished by a single person; they usually come from a collaboration and are made possible by the earlier work of other scientists. (Isaac Newton once said, “If I have seen further it is by standing on the shoulders of giants.”)

Furthermore, while one might tweet about a new unpublished scientific result, for more investigative journalism it’s better, of course, to avoid rushing the analysis. We all like to learn about and comment on that new scientific study that everyone’s talking about, but unfortunately people generally pay the most attention to what they hear first rather than to retractions or corrections that might be issued later on. We’re living in a fast-paced society and there is often demand for a quick turnaround for “content”, but the scientific enterprise goes on for generations—a much longer time-scale than the meme of the week.

Improving the peer-review process

And how can this peer-review system be improved? I’ve heard a variety of suggestions, some of which are probably worth experimenting with. We could consider having more than one person review papers, with the extra referees providing an advisory role. We could consider paying scientists for fulfilling their refereeing duties. We could make it possible for the scientific community to comment on papers on the arXiv (or VoxCharta or elsewhere), thus making these archives of papers and proceedings more like social media (or rather like a “social medium”, but I never hear anyone say that).

Another related issue is that of “open-access journals” as opposed to journals that have paywalls making papers inaccessible to people. Public access to scientific research is very important, and there are many advantages of promoting open journals and of scientists using them more often. Scientists (including me) should think more seriously about how we can move in that direction.

How scientists reach a consensus

Following my previous post on paradigm shifts and on how “normal science” occurs, I’d like to continue with a discussion of scientific consensus. To put this in context, I’m partly motivated by the recent controversy about Roger Pielke Jr., a professor of environmental studies at the University of Colorado Boulder, who is also currently a science writer for Nate Silver’s FiveThirtyEight website. (The controversy has been covered on Slate, Salon, and Huffington Post.) Silver’s work has been lauded for its data-driven analysis, but Pielke has been accused of misrepresenting data, selectively choosing data, and presenting misleading conclusions about climate change, for example about its effect on disaster occurrences and on the western drought.

This is also troubling in light of a recent article I read by Aklin & Urpelainen (2014), titled “Perceptions of scientific dissent undermine public support for environmental policy.” Based on an analysis of a survey of 1,000 broadly selected Americans aged 18-65, they argue that “even small skeptical minorities can have large effects on the American public’s beliefs and preferences regarding environmental regulation.” (Incidentally, a book by Pielke is among their references.) If this is right, then we are left with the question of how to achieve consensus and inform public policy related to important environmental problems. As the authors note, it is not difficult for groups opposed to environmental regulation to confuse the public about the state of the scientific debate. Since it is difficult to win the debate in the media, a more promising strategy would be to increase awareness of the inherent uncertainties in scientific research so that the public does not expect unrealistically high degrees of consensus. (And that’s obviously what I’m trying to do here.)

Already a decade ago, the historian of science Naomi Oreskes (formerly a professor at UC San Diego) analyzed, in a Science article, nearly 1,000 abstracts of papers about climate change from the previous decade and found that none disagreed explicitly with the notion of anthropogenic global warming; in other words, a consensus appears to have been reached. Not surprisingly, Pielke criticized this article a few months later. In her rebuttal, Oreskes made the point that, “Proxy debates about scientific uncertainty are a distraction from the real issue, which is how best to respond to the range of likely outcomes of global warming and how to maximize our ability to learn about the world we live in so as to be able to respond efficaciously. Denying science advances neither of those goals.”

The short answer to the question, “How do scientists reach a consensus?” is “They don’t.” Once a scientific field has moved beyond a period of transition, the overwhelming majority of scientists adopt at least the central tenets of a paradigm. But even then, there likely will be a few holdouts. The holdouts rarely turn out to be right, but their presence is useful because a healthy and democratic debate about the facts and their interpretation clarifies which aspects of the dominant paradigm are in need of further investigation. The stakes are higher, however, when scientific debate involves contentious issues related to public policy. In those situations, once a scientific consensus appears to be reached and once scientists are sufficiently certain about a particular issue, we want to be able to respond effectively in the short or long term with local, national, or international policies or regulations or moratoria, depending on what is called for. In the meantime, the debates can continue and the policies can be updated and improved.

Of course, it is not always straightforward to determine when a scientific consensus has been reached or when the scientific community is sufficiently certain about an issue. A relevant article here is that of Shwed & Bearman (2010), which was titled “The Temporal Structure of Scientific Consensus Formation.” They refer to “black boxing,” in which scientific consensus allows scientists to state something like “smoking causes cancer” without having to defend it, because it has become accepted by the consensus based on a body of research. Based on an analysis of citation networks, they show that areas considered by expert studies to have little rivalry have “flat” levels of modularity, while more controversial ones show much more modularity. “If consensus was obtained with fragile evidence, it will likely dissolve with growing interest, which is what happened at the onset of gravitational waves research.” But consensus about climate change was reached in the 1990s. Climate change skeptics (a label which may or may not apply to Pielke) and deniers can cultivate doubt in the short run, but they’ll likely find themselves ignored in the long run.
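
As a rough illustration of the kind of measure Shwed & Bearman use (this is my own simplified sketch with a made-up network, not their actual method or data), one can build a citation graph and compute its modularity: a literature that splits cleanly into separate citation “camps” has high modularity, which can signal ongoing rivalry rather than consensus.

```python
# A simplified sketch (my own toy example, not Shwed & Bearman's method or data)
# of using network modularity as a proxy for how divided a literature is:
# papers are nodes, citations are edges, and distinct citation "camps" raise Q.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Hypothetical toy citation network: two clusters of papers joined by one bridge
edges = [
    ("A", "B"), ("A", "C"), ("B", "C"),   # first cluster of mutually citing papers
    ("D", "E"), ("D", "F"), ("E", "F"),   # second cluster
    ("C", "D"),                           # a single citation bridging the camps
]
G = nx.Graph(edges)  # ignore citation direction for community detection

communities = greedy_modularity_communities(G)
Q = modularity(G, communities)
print(f"{len(communities)} communities detected, modularity Q = {Q:.2f}")
# High Q (well-separated camps) would hint at rivalry; a flat or low Q suggests
# the field cites itself as one community, consistent with consensus.
```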

Finally, I want to make a more general point. I often talk about how science is messy and nonlinear, and that scientists are human beings with their own interests and who sometimes make mistakes. As stated by Steven Shapin (also formerly a professor at UC San Diego) in The Scientific Revolution, any account “that seeks to portray science as the contingent, diverse, and at times deeply problematic product of interested, morally concerned, historically situated people is likely to be read as criticism of science…Something is being criticized here: it is not science but some pervasive stories we tend to be told about science” (italics in original). Sometimes scientific debates aren’t 100% about logic and data and it’s never really possible to be 0% biased. But the scientific method is the most reliable and respected system we’ve got. (A few random people might disagree with that, but I think they’re wrong.)

Citizen Science: a tool for education and outreach

I’ll write about a different kind of topic today. “Citizen science” is a relatively new term, though the activity itself is not so new. One definition of citizen science is “the systematic collection and analysis of data; development of technology; testing of natural phenomena; and the dissemination of these activities by researchers on a primarily avocational basis.” It involves public participation and engagement in scientific research in a way that educates the participants, makes the research more democratic, and makes it possible to perform tasks that a small number of researchers could not accomplish alone. Volunteers simply need access to a computer (or smartphone) and an internet connection to become involved and assist scientific research.

[Image: example of a face-on spiral galaxy]

Citizen science was popularized a few years ago by Galaxy Zoo, which involved visually classifying hundreds of thousands of galaxies into spirals, ellipticals, mergers, and finer classifications using the classification tree below. (I am a member of the Galaxy Zoo collaboration and have published a few papers with them.) By “crowdsourcing” the work of more than 100,000 volunteers around the world, new scientific research became possible that previously could not be done with such large datasets, including studies of the handedness of spiral galaxies, analyses of the environmental dependence of barred galaxies, and the identification of rare objects such as a quasar light echo that was dubbed “Hanny’s Voorwerp”. Other citizen science projects include mapping the moon, mapping air pollution, counting birds with birdwatchers, classifying a variety of insects, and many other projects.

[Image: Galaxy Zoo classification tree from Willett et al. (2013), Figure 1]
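
To give a flavor of how such crowdsourced classifications can be turned into usable catalogs, here is a minimal sketch (my own simplified illustration with made-up votes, not the actual Galaxy Zoo pipeline, which applies more sophisticated weighting and debiasing): each galaxy’s volunteer votes are aggregated into vote fractions, and a consensus label is kept only when agreement is high enough.

```python
# A minimal sketch (my own illustration, not the Galaxy Zoo pipeline) of the
# basic idea behind crowdsourced classification: many independent volunteer
# votes per galaxy are aggregated into vote fractions, and a consensus label
# is accepted only when the agreement exceeds a chosen threshold.
from collections import Counter

# Hypothetical volunteer votes keyed by galaxy ID
votes = {
    "galaxy_001": ["spiral", "spiral", "spiral", "elliptical", "spiral"],
    "galaxy_002": ["elliptical", "merger", "elliptical", "spiral", "merger"],
}

THRESHOLD = 0.6  # minimum vote fraction to call the classification "clean"

for galaxy_id, galaxy_votes in votes.items():
    counts = Counter(galaxy_votes)
    label, n = counts.most_common(1)[0]
    fraction = n / len(galaxy_votes)
    status = "clean" if fraction >= THRESHOLD else "uncertain"
    print(f"{galaxy_id}: {label} ({fraction:.0%} agreement, {status})")
```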

Citizen scientists have many motivations, but it appears that the primary one is the desire to make a contribution to scientific research (see this paper). In the process, by bringing together professional scientists and members of the general public and facilitating interactions between them, citizen science projects are important for outreach purposes, not just for research. In addition, by encouraging people to see a variety of images or photographs and to learn about how the research is done, citizen science is useful for education as well. Many valuable educational tools have been produced (such as by the Zooniverse projects). Citizen science projects are popular and proliferating because they give the opportunity for people at home or in the classroom to become actively involved in science. It has other advantages too, including raising awareness and stimulating interest in particular issues. Citizen science is continuing to evolve, and in the era of “big data” and social media, it has much potential and room for improvement.