Thoughts on the Academic Job Market in the Physical Sciences

I decided to add “Thoughts on…” at the beginning of the title to emphasize that, although I’ll present some facts, I’ll be expressing my personal opinions on the academic job market. These are my “2 cents”, and some people may disagree with them. And though there are similar issues and concerns in the social sciences and humanities, most of my experience comes from the physical sciences, especially physics and astronomy, so I’ll focus on those. If you don’t have the time to read the whole post, my main (and obvious) point is this: for a number of reasons, the job market has been getting worse over the past decade or more, with detrimental effects on scientific research and education (and on scientists, educators, and students). This is just a brief intro to the issues involved, and I’m not sure what the best solutions might look like, but I’ll try to write more about that in another post.

Soft Money

In the past, people with Ph.D.’s would decide upon earning their degree (or earlier) whether to proceed with the “traditional” academic career or shift to another kind of career. Those who continued would look for a tenure-track faculty or other long-term position at a college, university, or other institution. With the growth of “soft money,” a euphemism for uncertain funding from external federal (e.g., National Science Foundation) or occasionally private sources, short-term postdoctoral positions and fellowships have proliferated. For various reasons, soft money has become a very important part of the funding landscape (see this article in Science in 2000 and this more recent article).

One consequence of this is that most people in astrophysics now need to work at two or three or even more postdoc/fellowship positions before potentially having a shot at a long-term or more secure position. I’ve already done two postdocs myself, at the Max Planck Institute for Astronomy in Heidelberg and at the University of Arizona, and now I’m a research scientist at UC San Diego; both this and my previous position have been funded by soft money. The job market for tenure-track faculty positions was already tight and has only worsened since the financial crisis. Note that there are other career options as well, such as those associated with particular projects or programs.

Another consequence is that every couple of years people need to spend a considerable amount of time and effort applying for the next round of jobs. In addition, people spend a lot of time writing and submitting research grants—to try to obtain more soft money. Because so many people are competing for limited funds, grant acceptance rates are now very low (sometimes less than 10%) and senior positions are very competitive. All of these applications also take time away from research, outreach, and other activities, so one could argue that a lot of scientists’ time is wasted in the current system.

Moreover, this system perpetuates inequalities in science, which I’ll describe more below. It also reinforces a workforce imbalance (as pointed out in this article by Casadevall & Fang) where the senior people are mostly well-known males and the larger number of people at the bottom of the hierarchy are more diverse. In addition, although it can be fun to travel and live in different places, for people in couples or with families, it becomes difficult to sustain an academic career. (See these posts for more on diversity and work-life balance issues.)

The Adjunct Crisis

The job market and economic situation at US colleges and universities have spawned the “adjunct crisis” in teaching and education. Much has been written about this subject—though maybe not enough, as it’s still a major problem. (There’s even a blog called “The Adjunct Crisis.”) The number and fraction of adjuncts continue to grow: the NY Times reported last year that 76% (and rising) of US university faculty are adjunct professors.

The problem is that adjuncts are treated like second-class faculty. Employers are able to exploit the “reserve army of labor” and create nominally temporary positions, yet adjuncts are now relied upon much more heavily than before, serving as the majority of college instructors. According to this opinion piece on Al-Jazeera, most adjuncts teach at multiple universities while still not making enough to stay above the poverty line. Some adjuncts even depend on food stamps to get by. The plight of adjuncts received more media attention when Margaret Mary Vojtko, an adjunct who taught French for 25 years at Duquesne University in Pittsburgh, died broke and nearly homeless. Adjuncts clearly need better working conditions, rights, and a living wage.

Inequalities in Science

As I mentioned above, the current job market situation reinforces and exacerbates inequalities in science. The current issue of Science magazine has a special section on the “science of inequality,” which includes this very relevant article. The author writes that one source of inequality is what Robert Merton called the “Matthew effect,” whereby the rich get richer: well-known scientists receive disproportionately greater recognition and rewards than lesser-known scientists for comparable contributions. As a result, a talented few can parlay early successes into resources for future successes, accumulating advantages over time. (If you’re interested, Robert Merton was a sociologist of science whose work is relevant to this post.) From the other side of things, we’re all busy, and it’s easy to hire, cite, and award funding to people we already know are successful scientists, even though many lesser-known scientists might accomplish just as much with that grant or position, or may have published equally important work; finding and evaluating all of those lesser-known people takes time that often isn’t spent, so they can publish and still perish.

The author, Yu Xie, also points out that inequality in academics’ salaries has intensified, some academic labor is being outsourced, and one can be affected down the road by one’s location in global collaborative networks. If one does not obtain a degree at a top-tier university, then this can be detrimental in the future regardless of how impressive one’s work and accomplishments are. We can attempt to get around this last point by spending the time to recognize those who aren’t the most well-known in a field or at the most well-known institutions but who have considerable achievements and have produced important work.

“Love What You Do”

Finally, I’ll end by talking about the “Do what you love. Love what you do” (DWYL) nonsense. While this seems like good advice, since it’s great to try to follow your passions if you can, the DWYL mantra is elitist and it denigrates work. (I recommend checking out this recent article in Jacobin magazine.) People are encouraged to identify with the work that they love, even under working conditions and job insecurity that shouldn’t be tolerated. The author argues that there are many factors that keep PhDs providing such high-skilled labor for such extremely low wages, including path dependency and the sunk costs of earning a PhD, but one of the strongest is how pervasively the DWYL doctrine is embedded in academia. The DWYL ideology hides the fact that if we acknowledged all of our work as work, we could set appropriate limits for it, demanding fair compensation and humane schedules that allow for family and leisure time. These are things that every worker, including workers in academia, deserves.

From Dark Matter to Galaxies

Since I just got back from the From Dark Matter to Galaxies conference in Xi’an, China, I figured I’d tell you about it. I took this photo in front of our conference venue:
[photo: in front of the conference venue in Xi’an]

Xi’an is an important historical place, since it was one of the ancient capitals of the country (not just the Shaanxi province) and dates back to the 11th century BCE, during the Zhou dynasty. Xi’an is also the home of the terra cotta warriors, horses, and chariots, which (along with a mausoleum) were constructed during the reign of the first emperor, Qin Shi Huang. The terra cotta warriors were first discovered in 1974 by local farmers when they were digging a well, and they are still being painstakingly excavated today.


Back to the conference. This was the 10th Sino-German Workshop in Galaxy Formation and Cosmology, organized by the Chinese Academy of Sciences and the Max Planck Gesellschaft and especially by my friends and colleagues Kang Xi and Andrea Macciò. This one was a very international conference, with people coming from Japan, Korea, Iran, Mexico, US, UK, Italy, Austria, Australia, and other places.

Now, scientific conferences aren’t really political, unlike other things I’ve written about on this blog, though this conference did include debates about the nature of dark matter particles and perspectives on dark energy (which is relevant to this post). To be clear, dark matter is much better constrained by observations, such as measurements of galaxy rotation curves, masses of galaxy clusters, gravitational lensing, anisotropies in the cosmic microwave background radiation, etc. (On a historical note, one conference speaker mentioned that the CMB was first discovered fifty years ago, on 20 May 1964, by Penzias and Wilson, who later won the Nobel Prize.) In contrast, the constraints on dark energy (and therefore our understanding of it) are currently rather limited.

the main points

I’ll start with the main points and results people presented at the conference. First, I thought there were some interesting and controversial talks about proposed dark matter (DM) particles and alternative dark energy cosmologies. (The currently favored view or standard “paradigm” is ΛCDM, or cold dark matter with a cosmological constant.) People are considering various cold dark matter particles (WIMPs, axions), warm dark matter (e.g., sterile neutrinos), and self-interacting dark matter. (Warm dark matter refers to particles with a longer free-streaming length than CDM, which results in the same large-scale structure but in different small-scale behavior, such as cored density profiles of dark matter haloes.) The jury is still out, as they say, about which kind of particle makes up the bulk of the dark matter in the universe. There were interesting talks on these subjects by Fabio Fontanot, Veronica Lora, Liang Gao, and others.
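To make the free-streaming idea a bit more concrete, here is a minimal sketch (my own illustration, not something presented at the conference) of how WDM suppresses small-scale power relative to CDM, using a commonly quoted fitting form for the transfer function; the value of alpha below is just a placeholder for the free-streaming scale set by the particle mass.

```python
import numpy as np

def wdm_transfer(k, alpha, nu=1.2):
    """Fitting form T(k) = [1 + (alpha*k)**(2*nu)]**(-5/nu) (after Bode et al. 2001).
    alpha is the free-streaming scale in Mpc/h, set by the particle mass;
    the value used below is illustrative only."""
    return (1.0 + (alpha * k) ** (2.0 * nu)) ** (-5.0 / nu)

k = np.logspace(-2, 1.5, 200)                   # wavenumber in h/Mpc
suppression = wdm_transfer(k, alpha=0.05) ** 2  # P_WDM(k) / P_CDM(k)

# Large scales (small k) are untouched, while power below the free-streaming
# scale is erased, which is why WDM predicts fewer small haloes and can lead
# to cored, rather than cuspy, density profiles in the nonlinear regime.
print(suppression[0], suppression[-1])
```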

Second, people showed impressive results on simulations and observations of our Milky Way (MW) galaxy and the “Local Group”, which includes the dwarf galaxy satellites of the MW and of the Andromeda (M31) galaxy. Astrophysicists are studying the abundance, masses, and alignment of satellite galaxies, as well as the structure and stellar populations of the MW. Some of these analyses can even be used to tell us something about dark matter and cosmology, because once we know the MW dark matter halo’s mass, we can predict the number and masses of the satellites in a CDM or WDM model (see the toy sketch below). (Current constraints put the MW halo’s mass at about one to two trillion solar masses.) There were some interesting debates between Carlos Frenk, Aldo Rodriguez-Puebla, and others about this.
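As a toy illustration of that last point (my own sketch, with placeholder numbers rather than anything shown at the conference), one can evaluate a power-law cumulative subhalo mass function to estimate how many satellites a halo of a given mass should host above some minimum mass:

```python
def expected_satellites(m_host, m_min, norm=0.01, slope=0.9):
    """Toy cumulative subhalo mass function, N(>m_min) = norm * (m_min/m_host)**(-slope).
    norm and slope stand in for values calibrated against N-body simulations;
    the numbers here are illustrative placeholders, not conference results."""
    return norm * (m_min / m_host) ** (-slope)

m_milky_way = 1.5e12  # roughly 1-2 trillion solar masses, as quoted above
print(expected_satellites(m_milky_way, m_min=1e9))  # satellites above 1e9 Msun (toy number)
```

Repeating the exercise with a WDM-suppressed mass function would predict fewer low-mass satellites, which is why satellite counts can help discriminate between dark matter models.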

The third subject many people discussed involves models and observations of the large-scale structure of the universe and the formation and evolution of galaxies. There are many statistical methods to probe large-scale structure (LSS), but there is still a relatively wide range of model predictions and observational measurements at high redshift, allowing for different interpretations of galaxy evolution. In addition, simulations are making progress in producing realistic disk and elliptical galaxies, though different types of simulations disagree about the detailed physical processes (such as the treatment of star formation and stellar winds) that are implemented in them.

There were many interesting talks, including reviews by Rashid Sunyaev (famous for the Sunyaev-Zel’dovich effect), Houjun Mo, Joachim Wambsganss, Eva Grebel, Volker Springel, Darren Croton, and others. Mo spoke about impressive work on reconstructing the density field of the local universe, Springel spoke about the Illustris simulation, and Wambsganss gave a nice historical review of studies of gravitational lensing. I won’t give more details about the talks here unless people express interest in learning more about them.

my own work

In my unbiased opinion, one of the best talks was my own, which was titled “Testing Galaxy Formation with Clustering Statistics and ΛCDM Halo Models at 0<z<1.” (My slides are available here, if you’re interested.) I spoke about work-in-progress as well as results in this paper and this one. The former included a model of the observed LSS of galaxies, and you can see a slice from the modeled catalog in this figure:
[figure: a slice of the modeled galaxy catalog]

I also talked about galaxy clustering statistics, which are among the best methods for analyzing LSS and for bridging between the observational surveys of galaxies and numerical simulations of dark matter particles, whose behavior can be predicted based on knowledge of cosmology and gravity. I’m currently applying a particular set of models to measurements of galaxy clustering out to redshift z=1 and beyond, which includes about the last eight billion years of cosmic time. I hope that these new results (which aren’t published yet) will tell us more about how galaxies evolve within the “cosmic web” and about how galaxy growth is related to the assembly of dark matter haloes.
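For readers curious what “clustering statistics” means in practice, here is a minimal sketch of the two-point correlation function using the standard Landy-Szalay estimator. It is a simplified illustration (ignoring survey geometry, weights, and redshift-space effects), not the actual pipeline behind the results above.

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_counts(points_a, points_b, bin_edges):
    """Normalized pair counts between two point sets, binned by separation."""
    tree_a, tree_b = cKDTree(points_a), cKDTree(points_b)
    cumulative = tree_a.count_neighbors(tree_b, bin_edges)  # pairs within each radius
    return np.diff(cumulative) / (len(points_a) * len(points_b))

def landy_szalay(data, randoms, bin_edges):
    """xi(r) = (DD - 2DR + RR) / RR, with a random catalog tracing the survey volume."""
    dd = pair_counts(data, data, bin_edges)
    rr = pair_counts(randoms, randoms, bin_edges)
    dr = pair_counts(data, randoms, bin_edges)
    return (dd - 2.0 * dr + rr) / rr

# Toy usage: uniformly random "galaxies" should give xi(r) consistent with zero,
# while a real (clustered) catalog gives xi(r) > 0 on small scales.
rng = np.random.default_rng(1)
galaxies = rng.uniform(0.0, 100.0, size=(2000, 3))   # positions in a 100 Mpc/h box
randoms = rng.uniform(0.0, 100.0, size=(10000, 3))
bins = np.linspace(1.0, 20.0, 10)
print(landy_szalay(galaxies, randoms, bins))
```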

International Collaborations

(I actually wrote this post a week ago while I was in China, but many social media sites are blocked in China. Sites for books, beer, and boardgames weren’t blocked though—so they must be less subversive?)

Since I’m having fun on a trip to Nanjing and Xi’an now, seeing old friends and colleagues and attending a conference (From Dark Matter to Galaxies), I figured I’d write a lighter post about international collaborations. By the way, for you Star Trek fans, this month it’s been twenty years since the end of The Next Generation, which had the ultimate interplanetary collaboration. (And this image is from the “The Chase” episode.)

[image: Star Trek TNG, “The Chase”]

In physics and astrophysics, and maybe in other fields as well, scientific collaborations are becoming increasingly large and international. (The international aspect sometimes poses difficulties for conference calls spanning many timezones.) These trends are partly due to e-mail, wiki pages, Dropbox, SVN repositories, GitHub, remote observing, and online data sets (simulations and observations). Also, due to the increasing number of scientists, especially graduate students and postdoctoral researchers, many groups of people work on related subjects and can mutually benefit from collaborating.

On a related note, the number of authors on published papers is increasing (see this paper, for example). Single-author papers are less common than they used to be, and long author lists for large collaborations, such as Planck and the Sloan Digital Sky Survey, are increasingly common. Theory papers still have fewer authors than observational ones, but they too have longer author lists than before. (I’ll probably write more about scientific publishing in more detail in another post.)

Of course, conferences, workshops, collaboration meetings and the like are important for discussing and debating scientific results. They’re also great for learning about and exposing people to new developments, ideas, methods, and perspectives. Sometimes, someone may present a critical result or make a provocative argument that happens to catch on. Furthermore, conferences are helpful for advancing the careers of graduate students and young scientists, since they can advertise their own work and meet experts in their field. When looking for their next academic position (such as a postdoc or fellowship), it helps to have personally met potential employers. Working hard and producing research is not enough; everyone needs to do some networking.

Also, note that for international conferences and meetings, English has become the lingua franca, and this language barrier likely puts some non-native English speakers at a disadvantage, unfortunately. I’m not sure how this problem could be solved. I’m multilingual but I only know how to talk about science in English, and I’d have no confidence trying to talk about my research in Farsi or German. We’ve talked about privilege before, and certainly we should consider this a form of privilege as well.

Finally, I’ll make a brief point about the carbon footprint of scientists and the impact of (especially overseas) travel. For astrophysicists, the environmental impact of large telescopes and observatories in Hawaii and Chile, for example, is relatively small; it’s the frequent travel that takes a toll. I enjoy traveling, but we should work more on “sustainability” and reducing our carbon footprint. There are doubts about the effectiveness of carbon-offset programs (see the book Green Gone Wrong), so what really needs to be done is to reduce travel. Since conferences and workshops are very important, we should attempt to organize video conferences more often. For video conferences and other such events to be useful, though, I think more technological advances need to be made, and people need to be willing to adapt to them. Video conferences also benefit people who have families, children, or other obligations, and people outside the top-tier institutions who have smaller travel budgets. In other words, video conferences could potentially help to “level the playing field,” as they say.

How scientists reach a consensus

Following my previous post on paradigm shifts and on how “normal science” occurs, I’d like to continue with a discussion of scientific consensus. To put this in context, I’m partly motivated by the recent controversy about Roger Pielke Jr., a professor of environmental studies at the University of Colorado Boulder, who is also currently a science writer for Nate Silver’s FiveThirtyEight website. (The controversy has been covered on Slate, Salon, and Huffington Post.) Silver’s work has been lauded for its data-driven analysis, but Pielke has been accused of misrepresenting data, selectively choosing data, and presenting misleading conclusions about climate change, for example about its effect on disaster occurrences and on the western drought.

This is also troubling in light of a recent article I read by Aklin & Urpelainen (2014), titled “Perceptions of scientific dissent undermine public support for environmental policy.” Based on an analysis of a survey of 1000 broadly selected Americans aged 18-65, they argue that “even small skeptical minorities can have large effects on the American public’s beliefs and preferences regarding environmental regulation.” (Incidentally, a book by Pielke is among their references.) If this is right, then we are left with the question of how to achieve consensus and inform public policy related to important environmental problems. As the authors note, it is not difficult for groups opposed to environmental regulation to confuse the public about the state of the scientific debate. Since it is difficult to win the debate in the media, a more promising strategy would be to increase awareness about the inherent uncertainties in scientific research so that the public does not expect unrealistically high degrees of consensus. (And that’s obviously what I’m trying to do here.)

Already a decade ago, in a Science article, the historian of science Naomi Oreskes (formerly a professor at UC San Diego) analyzed nearly 1000 abstracts of papers about climate change from the previous decade and found that none disagreed explicitly with the notion of anthropogenic global warming–in other words, a consensus appears to have been reached. Not surprisingly, Pielke criticized this article a few months later. In her rebuttal, Oreskes made the point that, “Proxy debates about scientific uncertainty are a distraction from the real issue, which is how best to respond to the range of likely outcomes of global warming and how to maximize our ability to learn about the world we live in so as to be able to respond efficaciously. Denying science advances neither of those goals.”

The short answer to the question, “How do scientists reach a consensus?” is “They don’t.” Once a scientific field has moved beyond a period of transition, the overwhelming majority of scientists adopt at least the central tenets of a paradigm. But even then, there likely will be a few holdouts. The holdouts rarely turn out to be right, but their presence is useful because a healthy and democratic debate about the facts and their interpretation clarifies which aspects of the dominant paradigm are in need of further investigation. The stakes are higher, however, when scientific debate involves contentious issues related to public policy. In those situations, once a scientific consensus appears to be reached and once scientists are sufficiently certain about a particular issue, we want to be able to respond effectively in the short or long term with local, national, or international policies or regulations or moratoria, depending on what is called for. In the meantime, the debates can continue and the policies can be updated and improved.

Of course, it is not always straightforward to determine when a scientific consensus has been reached or when the scientific community is sufficiently certain about an issue. A relevant article here is that of Shwed & Bearman (2010), which was titled “The Temporal Structure of Scientific Consensus Formation.” They refer to “black boxing,” in which scientific consensus allows scientists to state something like “smoking causes cancer” without having to defend it, because it has become accepted by the consensus based on a body of research. Based on an analysis of citation networks, they show that areas considered by expert studies to have little rivalry have “flat” levels of modularity, while more controversial ones show much more modularity. “If consensus was obtained with fragile evidence, it will likely dissolve with growing interest, which is what happened at the onset of gravitational waves research.” But consensus about climate change was reached in the 1990s. Climate change skeptics (a label which may or may not apply to Pielke) and deniers can cultivate doubt in the short run, but they’ll likely find themselves ignored in the long run.
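For the curious, here is a rough sketch (my own toy example, not Shwed & Bearman’s actual method or data) of the kind of modularity measurement they describe, using networkx on a stand-in graph. In their analysis the nodes would be papers and the edges citations, and persistently high modularity would indicate rival camps citing mostly within themselves.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Stand-in for a citation network: nodes are papers, edges are citations.
G = nx.karate_club_graph()

communities = greedy_modularity_communities(G)
Q = modularity(G, communities)
print(f"{len(communities)} communities, modularity Q = {Q:.2f}")

# Loosely speaking: "flat" (low) modularity over time suggests consensus,
# while persistently high modularity suggests distinct, contending camps.
```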

Finally, I want to make a more general point. I often talk about how science is messy and nonlinear, and that scientists are human beings with their own interests and who sometimes make mistakes. As stated by Steven Shapin (also formerly a professor at UC San Diego) in The Scientific Revolution, any account “that seeks to portray science as the contingent, diverse, and at times deeply problematic product of interested, morally concerned, historically situated people is likely to be read as criticism of science…Something is being criticized here: it is not science but some pervasive stories we tend to be told about science” (italics in original). Sometimes scientific debates aren’t 100% about logic and data and it’s never really possible to be 0% biased. But the scientific method is the most reliable and respected system we’ve got. (A few random people might disagree with that, but I think they’re wrong.)

Big Science and Big Data

I’d like to introduce the topic of “big science.” This is especially important as appropriations committees in Congress debate budgets for NASA and NSF in the US (see my previous post), and related debates occurred a couple of months ago in Europe over the budget of the European Space Agency (ESA).

“Big science” usually refers to large international collaborations on projects with big budgets and long time spans. According to Harry Collins in Gravity’s Shadow (2004),

small science is usually a private activity that can be rewarding to the scientists even when it does not bring immediate success. In contrast, big-spending science is usually a public activity for which orderly and timely success is the priority for the many parties involved and watching.

He goes on to point out that in a project like the Laser Interferometer Gravitational-Wave Observatory (LIGO), it’s possible to change from small science to big but it means a relative loss of autonomy and status for most of the scientists who live through the transition. Kevles & Hood (1992) distinguish between “‘centralized’ big science, such as the Manhattan Project and the Apollo program; ‘federal’ big science, which collects and organizes data from dispersed sites; and ‘mixed’ big science, which offers a big, centrally organized facility for the use of dispersed teams.”

In addition to LIGO, there are many other big science projects, such as the Large Hadron Collider (LHC, which discovered the Higgs boson), the International Thermonuclear Experimental Reactor (ITER), and in astronomy and astrophysics, the James Webb Space Telescope (JWST, the successor to Hubble), the Large Synoptic Survey Telescope (LSST, pictured below), and the Wide-Field InfraRed Survey Telescope (WFIRST), for example.

[image: the LSST dome at night]

Note that some big science projects are primarily supported by government funding while others receive significant funding from industry or philanthropists. LSST and LIGO are supported by the NSF, JWST and WFIRST are supported by NASA, and the LHC is supported by CERN, but all of these are international. The fusion reactor ITER (see diagram below), the subject of a recent detailed New Yorker article, has experienced many delays, has gone over its many-billion-dollar budget, and has had management problems as well. While budget and scheduling problems are common for big science projects, ITER is in a situation in which it needs to produce results in the near future and avoid additional delays. (The US is contributing about 9% of ITER’s total cost, but its current contribution is lower than last year’s and its future contributions may be reevaluated at later stages of the project.)

[diagram: ITER in-cryostat overview]

As scientists, we try to balance small-, mid-, and large-size projects. The large ones are larger than before, require decades of planning and large budgets, and often consist of collaborations with hundreds of people from many different countries. Relatively small- and mid-scale projects (such as TESS and IBEX in astronomy) are very important too, for research, innovation, education, and outreach, and since they usually involve fewer risks, they can provide at least as much “bang for the buck” (in the parlance of our times).

In the context of “big science” projects these days, the concepts of “big data” and “data-driven science” are certainly relevant. Many people argue that we are now in an era of big data, in which we’re obtaining collections of datasets so large and complex that it becomes difficult to process them using on-hand database management tools or traditional data processing applications. Since the volume, velocity, and variety of data are rapidly increasing, it is increasingly important to develop and apply appropriate data mining techniques, machine learning, scalable algorithms, analytics, and other kinds of statistical tools, which often require more computational power than traditional data analyses. (For better or for worse, “big data” is also an important concept in the National Security Agency and related organizations, in government-funded research, and in commercial analyses of consumer behavior.)
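As one concrete (and purely illustrative) example of the kind of scalable approach mentioned above, many tools let you update a model incrementally on chunks of data that never fit in memory at once. Here is a hedged sketch using scikit-learn’s partial_fit interface on synthetic data; the data generator is a stand-in for reading a huge catalog or log in pieces.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def stream_batches(n_batches=50, batch_size=10_000, n_features=20, seed=0):
    """Stand-in for reading successive chunks of a dataset too large for memory."""
    rng = np.random.default_rng(seed)
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, n_features))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels
        yield X, y

clf = SGDClassifier()  # linear classifier trained by stochastic gradient descent
for X, y in stream_batches():
    clf.partial_fit(X, y, classes=[0, 1])  # update the model one chunk at a time

X_test, y_test = next(stream_batches(n_batches=1, seed=99))
print("held-out accuracy:", clf.score(X_test, y_test))
```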

In astronomy, this is relevant to LSST and other projects mentioned above. When LSST begins collecting data, each night for ten years it will obtain roughly as much data as was obtained by the entire Sloan Digital Sky Survey, which was until recently the biggest survey of its kind, and it will obtain about 800 measurements each for about 20 billion sources. We will need new ways to store and analyze these vast datasets. This also highlights the importance of “astrostatistics” (including in my own work) and of “citizen science” (which we introduced in a previous post) such as the Galaxy Zoo project. IT companies are becoming increasingly involved in citizen science as well, and the practice of citizen science itself is evolving with new technologies, datasets, and organizations.
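To get a feel for the scale, here is a quick back-of-envelope calculation; the per-row storage size is just an assumption for illustration, not an official LSST specification.

```python
n_sources = 20e9           # ~20 billion catalogued sources
n_measurements_each = 800  # ~800 measurements per source over the survey
bytes_per_row = 100        # assumed storage per measurement row (illustrative guess)

total_rows = n_sources * n_measurements_each
print(f"~{total_rows:.1e} measurement rows")                      # ~1.6e13
print(f"~{total_rows * bytes_per_row / 1e15:.1f} PB of catalog")  # order of magnitude only
```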

I’ll end by making a point that was argued in a recent article in Science magazine: we should avoid “big data hubris,” the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis.

My Experience with the Congressional Visit Day

[A previous version of this first appeared as a Guest Post on the AAS Policy Blog.]

Last week, I participated in the Congressional Visit Day (CVD) with the American Astronomical Society (AAS). I was just one member in a group of eighteen AAS members—a diverse group from around the country involved in many different subspecialties of astronomical research, as well as various teaching and outreach programs. Below is a nice photo of us (and I’m the guy wearing a hat). Our AAS delegation was part of a larger group of scientists, engineers, and business leaders involved in a few dozen organizations participating in the CVD, which was sponsored by the Science-Engineering-Technology Work Group. Go here for a further description of our program.

[photo: the AAS delegation for the 2014 Congressional Visit Day]

As scientists and members of the AAS, we had a few primary goals. We argued first and foremost for the importance of investing in scientific research (as well as education and outreach) through funding to the National Science Foundation (NSF), NASA, and science in particular departments (especially the Depts. of Energy and Defense). If you’re interested, you can see our handout here. We also encouraged our Representatives to sign two “Dear Colleague” letters that are currently passing through the House: the first letter is by Rep. G. K. Butterfield (D-NC) and is asking for a 3% increase to NSF’s FY 2015 budget to $7.5 billion, and the second letter is by Rep. Rush Holt (D-NJ), Rep. Randy Hultgren (R-IL), and Rep. Bill Foster (D-IL) and is asking the appropriators to “make strong and sustained funding for the DOE Office of Science one of your highest priorities in fiscal year 2015.”

We also told our Congress members about our personal experiences. In my case, I have been funded by NASA grants in the past and am currently funded by an NSF grant. I am applying for additional research grants, but it’s not easy when there is only enough funding available for a small fraction of submitted grant proposals. In the past, I have also benefited from projects and telescopes that were made possible by NASA and the NSF, and I plan to become involved in new telescopes and missions such as the Large Synoptic Survey Telescope (LSST), the Wide-Field InfraRed Survey Telescope (WFIRST), and possibly the James Webb Space Telescope (JWST, the successor to the Hubble Space Telescope). Also, if an NSF grant I’ve submitted is successful (fingers crossed!), I will be able to participate more actively in public outreach programs, especially in the San Diego area, in addition to continuing my research.

Not only did we explain the importance of stable funding for basic research, we also talked with our legislators about how astronomy is a “gateway science” that draws people in and inspires them to learn more, become more involved, and even potentially become scientists themselves.

We talked about the importance of improving science and math literacy, which also improves US competitiveness with respect to other countries, and about how investment in science spurs innovation in industry and leads to new and sometimes unexpected developments in computing, robotics, optics, imaging, radar, you name it. Since “all politics is local,” as they say, we also emphasized that these investments in scientific research are important for strong local, as well as national, economies. As we were visiting shortly after the introduction of the President’s Budget Request (PBR) for FY 2015, we also expressed our concern that the proposed budget reduces funding for NASA’s education and outreach activities within the Science Mission Directorate by two-thirds, and would require mothballing the Stratospheric Observatory For Infrared Astronomy (SOFIA) outside of the well-established senior review process.

My Congress members are Senators Barbara Boxer and Dianne Feinstein, whose staff we met, and Representative Susan Davis (CA-53), with whom we met personally (along with a member of her staff). We had a quick photo-op too, right before she had to get back to the House chamber for a vote. I was in a group with two other astronomers, who were from Oklahoma and Illinois, and we met with their respective Congress members as well. Our larger group was split into teams of three to four for the day’s visits, and each team met with the representatives and senators of all of its members.


Senators and Representatives serve on different committees and subcommittees, each with a specific jurisdiction over parts of the federal government. For example, Sen. Boxer is on the Science & Space Subcommittee of the Senate Commerce Committee and is the chair of the Committee on Environment & Public Works. Sen. Feinstein is chair of the Senate Appropriations Committee’s Subcommittee on Energy & Water, which has jurisdiction over the Department of Energy (among many other things). The appropriations committees are responsible for writing legislation that grants federal agencies the ability to spend money; that is, they appropriate the budgets for the agencies under their jurisdiction. Rep. Davis is a member of the House Education & Workforce Committee and has done a lot of work on educational reform, promoting youth mentoring, and civic education.

I think that we received a largely positive response from our congressional representatives. My three Congress members were very supportive and in agreement with our message. Some of the other members we met with, while generally positive about our message, left me with the impression that they approved of our “hard sciences” but didn’t want as much funding going to social sciences, climate science, and other particular fields. It seems to me that we must get ourselves out of this highly constrained budget environment, in which discretionary programs like those funding the sciences are capped each year; we need to either find additional sources of revenue (e.g., reducing tax breaks) or make other changes to current law.

In my previous blog post, I talked about the proposed budget and the negotiations taking place in Congressional committees. We also need to consider the current political situation with the upcoming mid-term elections. Once a budget (which may be significantly different than the PBR) is passed by the House and Senate Appropriations Committees, it will be considered by the House and Senate, which are currently controlled by Republicans and Democrats, respectively (the Democrats have 53 seats plus 2 independents who caucus with them). However, it appears possible that Republicans may retake the Senate in the 114th Congress, and in that case their leadership may resist even small additions to the current budget request and may attempt to simply pass a “continuing resolution” instead.

On the same day as our CVD (26th March), Office of Science and Technology Policy Director John Holdren appeared before the House Committee on Science, Space, and Technology, where there were considerable disagreements among the committee members about STEM education, SOFIA, and other issues. (Note that the committee is particularly polarized and has been criticized for its excessive partisanship and industry influence.) Fortunately, on the following day, a hearing before House appropriators on the NSF budget request fared better. This is encouraging, but in any case it will be a difficult struggle to produce a good budget (that is, good for science) within a short time-scale.

Paradigm Shifts?

In addition to physics and astronomy, I used to study philosophy of science and sociology. In my opinion, many scientists could learn a few things from sociologists and philosophers of science, to help them to better understand and consider how scientific processes work, what influences them and potentially biases scientific results, and how science advances through their and others’ work. In addition, I think that people who aren’t professional scientists (who we often simply call “the public”) could better understand what we are learning and gaining from science and how scientific results are obtained. I’ll just write a few ideas here and we can discuss these issues further later, but my main point is this: science is an excellent tool that sometimes produces important results and helps us learn about the universe, our planet, and ourselves, but it can be a messy and nonlinear process, and scientists are human–they sometimes make mistakes and may be stubborn about abandoning a falsified theory or interpretation. The cleanly and clearly described scientific results in textbooks and newspaper articles are misleading in a way, as they sometimes make us forget the long, arduous, and contentious process through which those results were achieved. To quote from Carl Sagan (in Cosmos), who inspired the subtitle of this blog (the “pale blue dot” reference),

[Science] is not perfect. It can be misused. It is only a tool. But it is by far the best tool we have, self-correcting, ongoing, applicable to everything. It has two rules. First: there are no sacred truths; all assumptions must be critically examined; arguments from authority are worthless. Second: whatever is inconsistent with the facts must be discarded or revised.

As you may know, the title of this post refers to Thomas Kuhn (in his book, The Structure of Scientific Revolutions). “Normal science” (the way science is usually done) proceeds gradually and is based on paradigms, which are collections of diverse elements that tell scientists what experiments to perform, which observations to make, how to modify their theories, how to make choices between competing theories and hypotheses, etc. We need a paradigm to demarcate what is science and to distinguish it from pseudo-science. Scientific revolutions are paradigm shifts, which are relatively sudden and unstructured events, and which often occur because of a crisis brought about by the accumulation of anomalies under the prevailing paradigm. Moreover, they usually cannot be decided by rational debate; paradigm acceptance via revolution is essentially a sociological phenomenon and is a matter of persuasion and conversion (according to Kuhn). In any case, it’s true that some scientific debates, especially involving rival paradigms, are less than civil and rational and can look something like this:
[comic: Calvin arguing]

I’d like to make the point that, at conferences and in grant proposals, scientists (including me) pretend that we are developing research that is not only cutting edge but is also groundbreaking and Earth-shattering; some go so far as to claim that they are producing revolutionary (or paradigm-shifting) research. Nonetheless, scientific revolutions are actually extremely rare. Science usually advances at a very gradual pace and with many ups and downs. (There are other reasons to act like our science is revolutionary, however: it helps to gain media attention and do outreach with the public, and it helps policy-makers justify investments in basic scientific research.) When a scientist or group of scientists does obtain a critically important result, it is usually the case that others have already produced similar results, though perhaps with less precision. Credit often goes to a single person who packaged and advertised their results well. For example, many scientists are behind the “Higgs boson” discovery, and though American scientists received the Nobel Prize for detecting anisotropies in the cosmic microwave background with the COBE satellite, Soviet scientists actually made an earlier detection with the RELIKT-1 experiment.

einstein-bohr

Let’s briefly focus on the example of quantum mechanics, in which there were intense debates in the 1920s about (what appeared to be) “observationally equivalent” interpretations, which in a nutshell were either probabilistic or deterministic and realist. My favorite professor at Notre Dame, James T. Cushing, wrote a provocative book on the subject with the subtitle, “Historical Contingency and the Copenhagen Hegemony”. The debates occurred between Niels Bohr’s camp (with Heisenberg, Pauli, and others, who were primarily based in Copenhagen and Göttingen) and Albert Einstein’s camp (with Schrödinger and de Broglie). Bohr’s younger followers were trying to make bold claims about QM and to make names for themselves, and one could argue that they misconstrued Einstein’s views. Einstein’s side had essentially lost by the 1930s; the nail in the coffin was von Neumann’s so-called impossibility proof of “hidden variables” theories–a proof that was shown to be false thirty years later. In any case, Cushing argues that in decisions about accepting or dismissing scientific theories, sometimes social conditions or historical coincidences can play a role. Mara Beller also wrote an interesting book about this (Quantum Dialogue: The Making of a Revolution), and she finds that in order to understand the consolidation of the Copenhagen interpretation, we need to account for the dynamics of the Bohr et al. vs. Einstein et al. struggle. (In addition to Cushing and Beller, another book by Arthur Fine, called The Shaky Game, is also a useful reference.) I should also point out that Bohr used the rhetoric of “inevitability,” which implied that there was no plausible alternative to the Copenhagen paradigm. If you can convince people that your view is already being adopted by the establishment, then the battle has already been won.

More recently, we have had other scientific debates about rival paradigms, such as, in astrophysics, the existence of dark matter (DM) versus modified Newtonian dynamics (MOND); DM is more widely accepted, though its nature–whether it is “cold” or “warm” and to what extent it is self-interacting–is still up for debate. Debates in biology, medicine, and economics are often even more contentious, partly because they have policy implications and can conflict with religious views.

Other relevant issues include the “theory-ladenness of observation”, the argument that everything one observes is interpreted through a prior understanding (and assumption) of other theories and concepts, and the “underdetermination of theory by data.” The concept of underdetermination dates back to Pierre Duhem and W. V. Quine, and it refers to the argument that, given a body of evidence, more than one theory may be consistent with it. A corollary is that when a theory is confronted with recalcitrant evidence, the theory is not falsified; instead, it can be reconciled with the evidence by making suitable adjustments to its hypotheses and assumptions. It is nonetheless the case that some theories are clearly better than others. According to Larry Laudan, we should not overemphasize the role of sociological factors over logic and the scientific method.

In any case, all of this has practical implications for scientists as well as for science journalists and for people who popularize science. We should be careful to be aware of, examine, and test our implicit assumptions; we should examine and quantify all of our systematic uncertainties; and we should allow for plenty of investigation of alternative explanations and theories. In observations, we also should be careful about selection effects, incompleteness, and biases. Finally, we should remember that scientists are human and sometimes make mistakes. Scientists are trying to explore and gain knowledge about what’s really happening in the universe, but sometimes other interests (funding, employment, reputation, personalities, conflicts of interest, etc.) play important roles. We must watch out for herding effects and confirmation bias, where we converge and end up agreeing on the incorrect answer. (Historical examples include the optical or electromagnetic ether; the crystalline spheres of medieval astronomy; the humoral theory of medicine; ‘catastrophist’ geology; etc.) Paradigm shifts are rare, but when we do make such a shift, let’s be sure that what we’re transitioning to is actually our currently best paradigm.

[For more on philosophy of science, this anthology is a useful reference, and in particular, I recommend reading work by Imre Lakatos, Paul Feyerabend, Helen Longino, Nancy Cartwright, Bas van Fraassen, Mary Hesse, and David Bloor, who I didn’t have the space to write about here. In addition, others (Ian Hacking, Allan Franklin, Andrew Pickering, Peter Galison) have written about these issues in scientific observations and experimentation. For more on the sociology of science, this webpage seems to contain useful references.]

Publish or Perish?

I’d like to add a short post about writing and publishing papers. The phrase “publish or perish” is commonly heard because there is some truth to it. According to Wikipedia, the phrase dates back to the 1930s and 1940s.

There are some advantages to having pressure to publish. When scientists are encouraged to write and publish their research, they and their work become more widely known, including among their peers. As scientists, we enjoy and are excited about working on research and publishing interesting new results, and putting the papers out helps to advance the field.

Considering scientists for academic posts (or for research grants) can be a difficult and time-consuming process. That’s unavoidable, especially when numerous people apply for small numbers of jobs and grants. One clearly quantifiable metric by which academics are judged is the number of papers they’ve published (another is the size of the grants they draw in). As pointed out by this recent New Yorker blog, both the amount and style of writing are related to the constant pressure to publish and the tough academic job market. In addition, the job market now appears two-tiered, with part-time and adjunct faculty working long hours for lower pay (see these NY Times and Salon articles). I plan to talk about this issue further in another post.

There are some major disadvantages to the publish-or-perish culture. One problem is that it doesn’t leave time for long-term or risky research on controversial topics. It also doesn’t allow for exploring new ideas, issues, or collaborations that might not pan out and result in something publishable. Yet these things are good for scientists and good for science, and when they are successful, there can be huge rewards or discoveries. For example, Peter Higgs, the Nobel prize-winning physicist who predicted the Higgs boson, says that no university would employ him in today’s academic system because he would not be considered “productive” enough. The publish-or-perish culture is dominant not just in the physical sciences, but also in the social sciences, humanities, and law.

A related issue is that of “luxury” journals like Nature, Science, and Cell. According to the Guardian, Randy Schekman, the Nobel prize-winning biologist, is no longer publishing in these journals because they distort the scientific process. He writes: “A paper can become highly cited because it is good science – or because it is eye-catching, provocative, or wrong.” These journals have become brand names, and they prioritize publishing provocative results–perhaps before they’ve been sufficiently tested and vetted by editors, peer reviewers, or the authors themselves. As a result, these journals have a reputation for publishing results that are often wrong. Scientists know this, yet publishing in them still carries prestige.

In addition to publishing papers and books, scientists work on other important things that should be valued too. Key among them is teaching, of course, as well as participating in outreach programs, mentoring students, communicating with journalists and policy-makers, and other academic service to the community. These activities are not as easily quantifiable as scientists’ publications, but we should make the effort to recognize this work.

More from the AAAS meeting

The second half of the AAAS meeting in Chicago was interesting too. (I wrote about the first half in my previous post.)

[photo: Alan Alda]

Probably the best and most popular event at the meeting was Alan Alda’s presentation. You’ll know Alan Alda as the actor from M*A*S*H (and recently, 30 Rock), but he’s also a visiting professor at the Alan Alda Center for Communicating Science at Stony Brook University. He gave an inspiring talk to a few thousand people about how to communicate science clearly and effectively in a way that people can understand. He talked about how one should avoid or be careful about using jargon. Interaction with the audience is important, and one can achieve that by telling a personalized story (with a hero, a goal, and an obstacle, which develops an emotional connection), or by engaging with the audience so that they become participants. It’s also important to communicate what is most interesting or exciting or curiosity-piquing about the science, but in the end, the words you use don’t matter as much as your body language and tone of voice. It’s also good to develop improvisation skills, so that when a particular explanation or analogy doesn’t appear to work well with the audience, you can adapt to the situation. He referred to the “curse of knowledge”: as scientists, we forget what it’s like not to be experts in our particular field of research. That can be an obstacle when interacting with most segments of the public, with Congress members and other politicians (most of whom aren’t scientists or haven’t the time to become familiar with the science), and even with scientists in other fields. Most of all, one needs to be clear, engaged, and connected with one’s audience. Finally, Alda told us about the “Flame Challenge,” which challenges scientists to explain flames and other concepts in a way that 11-year-olds can understand. (The kids are also the judges of the competition.) If the video of Alda’s talk becomes available online, I’ll link to it here for you.

I attended an interesting session on climate change and whether/how it’s possible to reduce greenhouse gas emissions from energy by 80% by 2050. As pointed out by the chair, Jane Long (who is one of the authors of this report), our energy needs will likely double or even triple by then, while we must simultaneously be reducing carbon emissions. Peter Loftus discussed this issue as well, and showed that primary energy demand as well as energy intensity (energy used per unit of GDP) have been rapidly increasing over the past twenty years, partly due to China. But to obtain substantial carbon reductions, the intensity needs to drop below what we’ve had for the past 40 years! We need to massively add to power generation capacity (10 times more rapidly than our previous rates), and it might not be feasible to exclude both nuclear and “carbon capture” in the process. Karen Palmer gave an interesting talk about the importance of energy efficiency as part of the solution, but she says that one problem is that it’s still hard to evaluate which policies best promote energy efficiency as well as, ultimately, energy savings and carbon emission reductions. Richard Lester made strong arguments about the need for nuclear power, since renewables might not be up to the task of meeting rising energy demands in the near future. This was disputed by Mark Jacobson, who pointed out that nuclear power has 9-25 times more pollution per kilowatt-hour than wind (due to mining and refining) and that it takes longer to construct a nuclear plant than the 2-5 years it takes to build wind or solar farms. Jacobson also discussed state-by-state plans: California benefits from many solar devices, for example, while some places in the northeast could use offshore wind farms. In addition, such offshore arrays could withstand and dissipate hurricanes (depending on their strength), and WWS (wind, water, solar) could generate about 1.5 million new jobs in the U.S. in construction alone. Different countries have very different economic situations and carbon footprints, so different solutions may be needed.
[chart: CO2 emissions per capita]
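A rough back-of-envelope calculation (my own arithmetic, not a result presented in the session) shows why that combination is so demanding: if energy demand doubles while total emissions must fall by 80%, the carbon emitted per unit of energy has to fall to roughly a tenth of today’s value.

```python
emissions_target = 0.20  # 80% reduction relative to today

for energy_growth in (2.0, 3.0):  # energy demand doubling or tripling by 2050
    carbon_intensity = emissions_target / energy_growth
    print(f"demand x{energy_growth:.0f}: CO2 per unit energy must fall to "
          f"~{carbon_intensity:.0%} of today's value")
```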

I caught part of a session on “citizen science” (see my previous post). Chris Lintott spoke about the history of citizen science and about how the internet has allowed for unprecedented growth and breadth of projects, including the numerous Zooniverse projects. Caren Cooper discussed social benefits of citizen science, and Carsten Østerlund discussed what motivates the citizen scientists themselves and how they learn as they participate. Lastly, Stuart Lynn spoke about how the next generation of citizen science systems can be developed, so that they can accommodate larger communities and larger amounts of data and so that people can classify billions of galaxies with the upcoming Large Synoptic Survey Telescope, for example.

Finally, there was another interesting session on how scientists can work with Congress and on the challenges they face, but more on that later…

Reporting from the American Association for the Advancement of Science (AAAS) meeting

I’d like to tell you about the AAAS meeting I’m attending. (Look here for the program.) It’s in Chicago, which is definitely much colder than southern California! I know it might sound strange, but it’s nice to experience a real winter again.

There were some astrophysics sessions (such as on galaxy evolution in the early universe and dark matter particles) but that wasn’t my focus here. I took some brief notes, and this is based on them…

There were a few sessions about science communication, outreach, and media. These are very important things: for example, according to Rabiah Mayas, the best indicator of whether people participate in science or become scientists as adults is the extent to which they engaged in science-related activities outside of school as kids. One person discussed the importance of fact-checking for producing high-quality and robust science writing, though it takes time; one should note that peer review in scientific research is supposed to serve a similar purpose, though it can be time-consuming as well. In any case, many people agreed that scientists and journalists need to interact better and more frequently. (As a side note, I heard two high-profile science journalists mispronounce “arXiv”, which is pronounced exactly like “archive”.) In addition, it’s worth noting that smaller regional newspapers often don’t have dedicated science desks, though this could provide opportunities for young writers to contribute. Danielle Lee also gave an excellent talk, “Raising STEM Awareness Among Under-Served and Under-Represented Audiences,” in which she discussed ways to take advantage of social media.

There were interesting presentations about scientists’ role in policy-making, but I’ll get back to that later. Someone made an important point that scientists should be extremely clear about when they are just trying to provide information versus when they are presenting (or advocating) policy options. I should be clearer about that myself.

I also saw interesting talks about surveys of public knowledge and opinions of science and technology, both in the U.S. and internationally. According to these polls, although some Americans are worried about global warming/climate change, people are more worried about toxic waste and water and air pollution. According to Lydia Saad (of Gallup), 58% of Americans worry a “great deal” or “fair amount” about global warming, 57% think human activities are the main cause, and 56% think it’s possible to take action to slow its effects, while only 34% think it will affect them or their way of life. In addition, she and Cary Funk (of Pew) found huge partisan gaps between self-identified Democrats, Independents, and Republicans. As one person pointed out, climate change is not just a science issue but has become a political one. Americans in these polls had pretty high opinions of scientists, engineers, and medical doctors, but people had the best views of those in the military. There is a wide range of knowledge of science, especially when it comes to issues such as evolution. (Note that the fraction of Republicans who believe in evolution by natural processes has dropped, driven by a decline among non-evangelical Republicans; evangelicals already had a low fraction.) Also note that the numbers depend on how poll questions are asked: for example, ~40% agree with the statement “The universe began with a huge explosion”, but when you add “according to astronomers”, the proportion jumps up to 60%. (If you’re curious, this image basically describes astronomers’ current view of the Big Bang.)

[image: astronomers’ current view of the Big Bang]

There was an interesting session dedicated to climate change science, which included scientists who contributed to the IPCC’s recent 5th Assessment Report (which we talked about in an earlier blog post). Note the language they’re required to use to quantify their un/certainty: “virtually certain” means 99% certain, and then there’s “very likely” (90%), “likely” (67%), and “more likely than not” (>50%). Michael Wehner discussed applications of “extreme value statistics” (which are sometimes used to analyze extremely luminous galaxies or large structures in astronomy: see this and this) to extreme temperatures. Extremely cold days will be less cold, while extremely hot days will be more common and hotter. For particular extreme weather events, one can’t say whether they’re due to climate change, but one can ask “How has the risk of this event changed because of climate change?” or “How did climate change affect the magnitude of this event?” It seems very likely that heavy rainy days will become heavier, dry seasons longer, and consecutive dry days between precipitation events more numerous. There will be more droughts in the west (west of the Rockies) and southeast, and more floods in the midwest and northeast.
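For readers unfamiliar with extreme value statistics, here is a minimal, hedged sketch of the kind of analysis involved, fitting a generalized extreme value (GEV) distribution to synthetic annual-maximum temperatures with scipy. The data and numbers are made up for illustration; this is not Wehner’s actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
annual_max_temp = 38 + 2.5 * rng.gumbel(size=60)  # 60 fake "years" of annual maxima (deg C)

# Fit the GEV family (the Gumbel distribution is its zero-shape special case).
shape, loc, scale = stats.genextreme.fit(annual_max_temp)

# Under the fitted model: chance that a given year's maximum exceeds 45 C,
# and the corresponding return period. Shifting loc upward (a warming climate)
# makes such extremes markedly more likely.
p_exceed = stats.genextreme.sf(45.0, shape, loc=loc, scale=scale)
print(f"P(annual max > 45 C) = {p_exceed:.3f}, return period = {1/p_exceed:.0f} years")
```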

The plenary speaker today was Steven Chu, who was Secretary of Energy until last year, and he gave an excellent talk. He compared convincing people about climate change to earlier campaigns to convince people about the dangers of tobacco use and its connection to lung cancer; both issues have had industry-promoted disinformation as well. On rising temperatures with climate change, he channeled Yogi Berra when he said, “If we don’t change direction, we’ll end up where we’re heading.” He talked a little about the role of natural gas (see also these NYT and Science blogs), and he discussed carbon capture, utilization, and sequestration (CCUS). Finally, he talked about how one might determine an appropriate price for carbon. He advocated a revenue-neutral tax, starting at $10/ton and rising to $50/ton over ~12 years, with the money raised given directly back to the public. He also talked about wind turbines, which are now more reliable, efficient, and bigger, and he predicted a 20-30% decline in their price over the next 10-15 years. The cost of solar photovoltaic (PV) modules is also dropping, but installation costs and licensing fees (“soft costs”) need to be reduced as well. I definitely had the impression that, now that Chu is no longer Energy Secretary, he could be more frank than before about his views on contentious issues.