Climate Change is an Environmental Justice issue

In a previous blog post, I introduced the concept of environmental justice (EJ), which refers to the fair treatment of people regardless of race or class with respect to the development, implementation, and enforcement of environmental laws, regulations, and policies. I’ve also previously written about climate change here and about some efforts to address it here. My point now is that climate change is an EJ issue, especially because anthropogenic greenhouse gas (GHG) emissions have been produced primarily by people in wealthier countries, while people in poorer countries and regions will likely bear the brunt of the effects of climate change, including rising sea levels, drought, and reduced access to food staples.

The new report from the Intergovernmental Panel on Climate Change (IPCC) was released just a week ago, shortly before Earth Day. (You can read news coverage of the report in the Guardian, NY Times, and Atlantic.) The IPCC report was produced by 1,250 international experts and approved by 194 governments, and it is the last of three reports to assess climate research conducted since 2007. The authors argue that only an intensive push in the immediate future can limit climate change to less than catastrophic levels, but the falling costs of alternative energies have made a mass-scale transition practical and affordable. Avoiding (the worst of) climate change will be less costly than attempting to adapt to it later with unpredictable geoengineering technologies. The report also discusses “co-benefits”: for example, efforts to reduce air pollution (including GHGs) would improve public health and save millions of lives, balancing the cost of reducing the emissions. The report states that putting a price on GHG emissions, such as through carbon taxes or emission permits (which I’ll write about in a later post), would help redirect investment toward more climate-friendly technologies and away from fossil fuels.

It’s also interesting to see what was not included in the IPCC report. For example, rich countries (including the US) pushed to remove a proposed section that called for hundreds of billions of dollars of aid per year to be paid to developing countries. The report does refer to “issues of equity, justice, and fairness [that] arise with respect to mitigation and adaptation,” but these are issues that should be further discussed and addressed. For example, we are already seeing extreme climate events, including heat waves, floods, wildfires, and droughts, and poor countries and small island nations are particularly vulnerable to storm surges, coastal flooding, and rising sea levels.

To mitigate climate change, the report looks favorably on cutting energy waste, improving efficiency, and shifting toward renewable energies, especially zero-emission sources like wind and solar, whose dropping costs are becoming competitive. Wealthier countries can lead these efforts, and they could fund low-carbon growth in poorer countries, which are unfortunately expanding their use of coal-fired power plants. Archbishop Desmond Tutu has even advocated for an anti-apartheid-style campaign against fossil fuel companies to respond to the “injustice of climate change.” On that note, I’ve noticed that the term “climate justice” has become increasingly common.

Many vulnerabilities to climate change are visible in the US as well (see this UCS blog), and much more can be done to work toward climate change mitigation and adaptation. Unfortunately, climate change has not yet been connected to EJ in US policy, in spite of the Executive Order signed by Pres. Bill Clinton twenty years ago, which instructed all federal agencies to consider impacts on people of color, the elderly, and low-income communities when crafting new policies and rules. (See this post by Robert Bullard, one of the leaders of the EJ movement.) The Environmental Protection Agency’s new Plan EJ 2014 briefly mentions climate change, which is at least a start.

In order to mobilize people, governments, and institutions to actively address climate change, we should discuss how climate change issues are framed. A week ago, I attended an interesting political science talk by Sarah Anderson, professor of environmental politics at UC Santa Barbara. (By the way, I have to admit that the political scientists at UCSD have more comfy chairs than us astrophysicists. We’ll have to work on that!) She mentioned “moral foundations theory” (proposed by Jonathan Haidt; see also Lakoff & Wehling): political liberals construct their moral systems primarily upon two psychological foundations (fairness/justice and harm/care), while conservatives’ moral systems are also based on others (including ingroup/loyalty, authority/respect, and purity/degradation). So if the goal is to address climate change–which may be one of the greatest environmental and socioeconomic problems of our generation–then we should try to appeal to everyone, not just those identified as liberals or leftists. To do so, maybe we need to use additional frames, such as emphasizing the importance of avoiding environmental degradation and the potential economic benefits of mitigating climate change.

Finally, political scientists often focus on the workings of the state and on policies and regulations, but there are many important actors outside the state, especially among social movements and civil society. Fortunately, organized opposition to the Keystone pipelines and fracking, for example, has made these climate change issues more pressing for policy-makers.
Harvard poli sci professor Theda Skocpol (quoted in a New Yorker article) criticizes the tactic of mobilizing support exclusively through the media; instead, she argues, “reformers will have to build organizational networks across the country, and they will need to orchestrate sustained political efforts that stretch far beyond friendly Congressional offices, comfy board rooms, and posh retreats.” Perhaps what the environmental movement needs are more “federated structures,” with national leaders who interact with political officials in the White House and Congress as well as local chapters that regularly meet (and organize rallies or teach-ins) to develop their larger goals.

How scientists reach a consensus

Following my previous post on paradigm shifts and on how “normal science” occurs, I’d like to continue with a discussion of scientific consensus. To put this in context, I’m partly motivated by the recent controversy about Roger Pielke Jr., a professor of environmental studies at the University of Colorado Boulder, who is also currently a science writer for Nate Silver’s FiveThirtyEight website. (The controversy has been covered on Slate, Salon, and Huffington Post.) Silver’s work has been lauded for its data-driven analysis, but Pielke has been accused of misrepresenting data, selectively choosing data, and presenting misleading conclusions about climate change, for example about its effect on disaster occurrences and on the western drought.

This is also troubling in light of a recent article I read by Aklin & Urpelainen (2014), titled “Perceptions of scientific dissent undermine public support for environmental policy.” Based on an analysis of a survey of 1,000 broadly selected Americans aged 18-65, they argue that “even small skeptical minorities can have large effects on the American public’s beliefs and preferences regarding environmental regulation.” (Incidentally, a book by Pielke is among their references.) If this is right, then we are left with the question of how to achieve consensus and inform public policy related to important environmental problems. As the authors note, it is not difficult for groups opposed to environmental regulation to confuse the public about the state of the scientific debate. Since it is difficult to win the debate in the media, a more promising strategy would be to increase awareness about the inherent uncertainties in scientific research so that the public does not expect unrealistically high degrees of consensus. (And that’s obviously what I’m trying to do here.)

A decade ago, in a Science article, the historian of science Naomi Oreskes (formerly a professor at UC San Diego) analyzed nearly 1,000 abstracts of articles about climate change from the previous decade and found that none disagreed explicitly with the notion of anthropogenic global warming–in other words, a consensus appears to have been reached. Not surprisingly, Pielke criticized this article a few months later. In her rebuttal, Oreskes made the point that, “Proxy debates about scientific uncertainty are a distraction from the real issue, which is how best to respond to the range of likely outcomes of global warming and how to maximize our ability to learn about the world we live in so as to be able to respond efficaciously. Denying science advances neither of those goals.”

The short answer to the question, “How do scientists reach a consensus?” is “They don’t.” Once a scientific field has moved beyond a period of transition, the overwhelming majority of scientists adopt at least the central tenets of a paradigm. But even then, there likely will be a few holdouts. The holdouts rarely turn out to be right, but their presence is useful because a healthy and democratic debate about the facts and their interpretation clarifies which aspects of the dominant paradigm are in need of further investigation. The stakes are higher, however, when scientific debate involves contentious issues related to public policy. In those situations, once a scientific consensus appears to be reached and once scientists are sufficiently certain about a particular issue, we want to be able to respond effectively in the short or long term with local, national, or international policies or regulations or moratoria, depending on what is called for. In the meantime, the debates can continue and the policies can be updated and improved.

Of course, it is not always straightforward to determine when a scientific consensus has been reached or when the scientific community is sufficiently certain about an issue. A relevant article here is Shwed & Bearman (2010), “The Temporal Structure of Scientific Consensus Formation.” They refer to “black boxing,” in which scientific consensus allows scientists to state something like “smoking causes cancer” without having to defend it, because it has become accepted by the consensus based on a body of research. Based on an analysis of citation networks, they show that areas considered by expert studies to have little rivalry have “flat” levels of modularity, while more controversial ones show much more modularity. “If consensus was obtained with fragile evidence, it will likely dissolve with growing interest, which is what happened at the onset of gravitational waves research.” But consensus about climate change was reached in the 1990s. Climate change skeptics (a label which may or may not apply to Pielke) and deniers can cultivate doubt in the short run, but they’ll likely find themselves ignored in the long run.
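To give a sense of what modularity measures here, the following is a minimal sketch in plain Python with made-up toy data (Shwed & Bearman’s actual analysis is far more sophisticated). It computes Newman modularity for a tiny “citation network”: a high value means the papers cluster into separate camps that mostly cite themselves, while a value near zero indicates the “flat,” well-mixed structure associated with consensus.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q of an undirected network under a given partition.

    edges: list of (u, v) pairs; community: dict mapping node -> camp label.
    High Q: citations cluster into separate camps (active controversy).
    Q near zero: a "flat", well-mixed literature (consensus).
    """
    m = len(edges)
    degree = defaultdict(int)   # node -> degree
    intra = defaultdict(int)    # camp -> number of edges inside the camp
    deg_sum = defaultdict(int)  # camp -> total degree of its nodes
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    for node, d in degree.items():
        deg_sum[community[node]] += d
    # Q = sum over camps of (fraction of intra-camp edges) - (expected fraction)
    return sum(intra[c] / m - (deg_sum[c] / (2 * m)) ** 2 for c in deg_sum)

# Two tightly knit camps joined by a single cross-citation:
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
camps = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, camps), 3))  # 0.357: clearly polarized
```

By contrast, assigning every node to a single camp gives Q = 0, the “flat” limit.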

Finally, I want to make a more general point. I often talk about how science is messy and nonlinear, and that scientists are human beings with their own interests and who sometimes make mistakes. As stated by Steven Shapin (also formerly a professor at UC San Diego) in The Scientific Revolution, any account “that seeks to portray science as the contingent, diverse, and at times deeply problematic product of interested, morally concerned, historically situated people is likely to be read as criticism of science…Something is being criticized here: it is not science but some pervasive stories we tend to be told about science” (italics in original). Sometimes scientific debates aren’t 100% about logic and data and it’s never really possible to be 0% biased. But the scientific method is the most reliable and respected system we’ve got. (A few random people might disagree with that, but I think they’re wrong.)

Water Policy Issues, with a Focus on the US Southwest

Water policy issues are very important, but we haven’t discussed them much on this blog yet. Much of my information here comes from Ellen Hanak and other analysts at the Public Policy Institute of California (PPIC), analysts from the Union of Concerned Scientists (UCS), a recent article by Christopher Ketchum in Harper’s, a book by Robert Glennon (Unquenchable), and other sources. I’m not an expert on water policy, and any errors are my own. As usual, please let me know if you notice any errors, and I’m happy to hear any comments. I’ll focus on the southwestern US (mainly because I grew up in Colorado and now live in California), but many of these issues apply elsewhere as well. And while the Southwest is dealing with drought and water scarcity, other places, such as the UK and the US Midwest, are dealing with flooding.


According to the Worldwatch Institute, some 1.2 billion people already live in areas of physical water scarcity, while another 1.6 billion face “economic water shortage”. By 2025, almost half of the world’s population will be living in conditions of water stress. Some analysts predict that water wars (see Vandana Shiva’s book) and conflicts will increase in the future. Considering that we need water to live, it’s not surprising that the United Nations General Assembly passed a resolution declaring that access to clean water and sanitation is a fundamental human right.

At least conditions on Earth are not as bad as Mars, which has experienced 600 million years of drought and which probably hasn’t supported life, at least on its surface. But water scarcity is an extremely important problem that we’re probably not taking seriously enough; as Stephen Colbert put it, “if the human body is 60 percent water, why am I only two percent interested?”

The Southwest and California in particular are experiencing their worst recorded drought (for example, see the NASA satellite images below). In response, the California state legislature and Gov. Brown passed a drought relief package last month, while Sen. Feinstein and others are seeking to pass a bill in Congress to aid drought-stricken states.


Now here’s some historical and legal context. The Colorado River Compact of 1922 was negotiated by members of the upper-basin states (Colorado, New Mexico, Utah, Wyoming) and the lower-basin states (Arizona, California, Nevada), and it was an agreement for hydraulic management of the Southwest. According to the US system of water rights, however, the person who first made “beneficial use” of a stream or river had first right to it. Under this doctrine, the earliest users of the Colorado River (California) could legally establish a monopoly over the regional water supply, even though most of that water came from another state (Colorado). A major problem was that because 1922 fell during an unusually wet period, people assumed that the Colorado held more water than it really did: its annual flow was estimated at 17-18 million acre-feet, though it was later more accurately estimated at 14 million acre-feet (17 billion cubic meters) on average. The river was therefore already overallocated from the start. The lower basin (including southern CA) is now overusing its share of the Colorado River, and it’s not a sustainable situation. A court case (Arizona v. California) that was decided by the Supreme Court in 1963 affirmed that Arizona was owed 2.8 million acre-feet of water annually, but under the doctrine of prior appropriation, Arizona’s rights would remain secondary to California’s.
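To make the overallocation concrete, here’s a back-of-the-envelope sketch. The Compact apportioned 7.5 million acre-feet (MAF) per year to each basin, and a 1944 treaty later added 1.5 MAF for Mexico; comparing those paper allocations against the river’s realistic average flow shows the arithmetic problem directly:

```python
ACRE_FOOT_M3 = 1233.48  # cubic meters in one acre-foot

# Paper allocations, in million acre-feet (MAF) per year
allocations = {"upper basin": 7.5, "lower basin": 7.5, "Mexico": 1.5}

optimistic_flow = 17.5  # 1920s estimate, made during a wet period (MAF/yr)
realistic_flow = 14.0   # later long-term average estimate (MAF/yr)

total_allocated = sum(allocations.values())
shortfall = total_allocated - realistic_flow
print(f"allocated on paper: {total_allocated} MAF/yr")      # 16.5
print(f"shortfall vs. realistic flow: {shortfall} MAF/yr")  # 2.5

# Sanity check of the unit conversion quoted in the text:
billion_m3 = realistic_flow * 1e6 * ACRE_FOOT_M3 / 1e9
print(f"14 MAF is about {billion_m3:.0f} billion cubic meters")  # about 17
```

In other words, more water was promised on paper than the river reliably carries, even before accounting for climate-driven declines in flow.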

For water use, it’s useful to distinguish between water withdrawal (from surface or ground sources) and the consumption of water already withdrawn. As Ellen Hanak argued at a recent PPIC event in Sacramento, we therefore need to consider not just water supplies but also water management and (in)efficient water consumption. Although one usually thinks of water for drinking, washing, cleaning, and other residential uses, much more water is used for irrigation (agriculture), industry, and power plants; according to the UCS, power plants account for 41% of freshwater withdrawals in the US. It’s also useful to distinguish between direct and indirect water use, and I’ll get into that more below.

Water shortages, already a critical issue in the Southwest, are likely to become far worse with climate change (although the extent to which the current drought is due to climate change is still debated). Rivers such as the Colorado, which is primarily supplied by snowmelt and is already overallocated, are particularly vulnerable. Over the past fourteen years, the Colorado River has been at its lowest level since the ninth century. According to Tim Barnett from UC San Diego’s Scripps Institution of Oceanography (SIO), with climate change, currently scheduled water deliveries from the Colorado River are unlikely to be met by mid-century. Rising air temperatures due to global warming will result in reduced snowfall: by the end of this century, California’s ski season could disappear with an 80% loss of Sierra snowpack, and Washington and Oregon would experience reduced snowfall as well. In addition, although per capita water use has been gradually decreasing, population growth in the Southwest is likely to increase urban water demand in some regions. In a high carbon emissions scenario, annual losses to agriculture, forestry, and fisheries could reach $4.3B in California alone, and the prices of fresh fruit, vegetables, dairy, and fish will rise. There will be more competition between human water use and the water needed to support fish and other wildlife, and potential solutions will involve difficult trade-offs. (The following figure from the EPA summarizes climate impacts on the hydrologic cycle.)


In the studies mentioned above by SIO scientists, the Colorado River’s average annual flow could decline by as much as 30% by 2050. As a result, without massive reductions in water usage, Lake Mead has a 50% chance of declining to “dead pool” by 2036. At that level, water deliveries to millions of people in California and Arizona and to millions of acres of farmland will cease, and hydroelectric production at the dam will already have stopped. It is incredible to consider that this could happen in our lifetime, as the Colorado is the same river that carved the Grand Canyon over tens of millions of years, and it is one of the rivers on which the Ancestral Puebloans depended until around 1300, when drought, decreased rainfall, and a drop in water table levels appear to have driven the people away from their settlements. (See also this article in National Geographic about ancient “megadroughts” in human history.)

The largest fraction of water consumption is due to agriculture, power plants, and industry. Considering that we indirectly need water because of our need for energy, this points to the issue of the “water-energy nexus.” The average U.S. family of four directly uses 400 gallons of freshwater per day, while indirectly using 600-1800 gallons through power plant water withdrawals. We need energy for water production and distribution (and the desalination plant being constructed near San Diego will require quite a bit), and we also need water for energy-related infrastructure. Coal and nuclear power plants use large amounts of freshwater for cooling: for example, a typical 600-MW coal-fired plant consumes more than 2 billion gallons of water per year from nearby lakes, rivers, aquifers, or oceans. In addition, as discussed in my previous blog post, fracking techniques for extracting shale gas require millions of gallons of water to be injected into a well, and they can contaminate groundwater as well. Fortunately, wind turbines and solar photovoltaic modules require essentially no water at all, but other renewable energies, like hydroelectric, bioenergy, and geothermal, can be water intensive. As argued by Laura Wisland, since we expect climate change to increase the frequency and severity of droughts in California, it will be important to hedge our electricity supplies with predictable, renewable resources, especially wind and solar.
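To put the power-plant figure on a per-unit-of-energy basis, here’s a rough sketch; the 70% capacity factor is my own illustrative assumption, not a number from the sources above:

```python
# Figures from the text
family_direct_gal_per_day = 400            # family of four, direct use
family_indirect_gal_per_day = (600, 1800)  # via power-plant withdrawals
coal_plant_gal_per_year = 2e9              # typical 600-MW coal plant
coal_plant_mw = 600

# Assumption: the plant runs at a 70% capacity factor (illustrative only)
capacity_factor = 0.70
mwh_generated = coal_plant_mw * capacity_factor * 8760  # 8760 hours per year
gal_per_mwh = coal_plant_gal_per_year / mwh_generated
print(f"roughly {gal_per_mwh:.0f} gallons of cooling water per MWh")
```

On these rough numbers, every megawatt-hour of coal generation carries a water cost of several hundred gallons, which is why a household’s indirect water footprint through electricity can easily exceed its direct use.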

What can be done? If there is a “silver lining” to the current situation, it is that the ongoing drought provides a window for reform; here are a few ideas. We should shift toward less water-intensive sources of energy such as wind and solar. Water should cost more: we should modernize water measurement and pricing with better estimates of water use and prices that reflect water’s economic value. We could learn from cities in dry places elsewhere (such as Australia) about how to make urban areas more water efficient, and we could have tiered water rates with higher prices for greater use. In agriculture, crops that cannot be grown without subsidies should not be grown. We need improvements to local groundwater management. Since surveys show that most Californians believe that there are environmental inequities between more and less affluent communities in the state, it’s also important to consider environmental justice issues while developing new water policy programs (see this article, for example). We need to develop more reliable funding (through state bonds or local ratepayers), especially for environmental management, flood protection, and statewide data collection and analysis. Finally, as argued in this PPIC report, water management agencies at all levels should aim to develop more coordinated, integrated approaches to management and regulatory oversight, drawing on scientific and technical analysis to support sound and balanced decisions.
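As a small illustration of the tiered-rate idea above, here is a sketch of an increasing-block pricing function; the block sizes and prices are hypothetical, chosen only to show the mechanism:

```python
def tiered_water_bill(gallons, tiers=((5000, 0.004),
                                      (10000, 0.006),
                                      (float("inf"), 0.010))):
    """Monthly bill under increasing-block ("tiered") rates.

    tiers: (upper bound in gallons, price per gallon) -- hypothetical values.
    Each successive block of use is charged at a higher marginal price,
    so heavy users pay more per gallon than light users.
    """
    bill, prev_bound = 0.0, 0
    for bound, price in tiers:
        block = min(gallons, bound) - prev_bound
        if block <= 0:
            break
        bill += block * price
        prev_bound = bound
    return bill

light = tiered_water_bill(4000)   # all use falls in the cheapest block
heavy = tiered_water_bill(12000)  # use spills into the top block
print(light, heavy)               # 16.0 70.0
```

The point of the design is that the average price per gallon rises with consumption (here, from $0.004 to nearly $0.006), which preserves affordable basic service while discouraging heavy use.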

Big Science and Big Data

I’d like to introduce the topic of “big science.” This is especially important as appropriations committees in Congress debate budgets for NASA and NSF in the US (see my previous post); related debates occurred a couple of months ago in Europe over the budget of the European Space Agency (ESA).

“Big science” usually refers to large international collaborations on projects with big budgets and long time spans. According to Harry Collins in Gravity’s Shadow (2004),

small science is usually a private activity that can be rewarding to the scientists even when it does not bring immediate success. In contrast, big-spending science is usually a public activity for which orderly and timely success is the priority for the many parties involved and watching.

He goes on to point out that in a project like the Laser Interferometer Gravitational-Wave Observatory (LIGO), it’s possible to change from small science to big science, but doing so means a relative loss of autonomy and status for most of the scientists who live through the transition. Kevles & Hood (1992) distinguish between “‘centralized’ big science, such as the Manhattan Project and the Apollo program; ‘federal’ big science, which collects and organizes data from dispersed sites; and ‘mixed’ big science, which offers a big, centrally organized facility for the use of dispersed teams.”

In addition to LIGO, there are many other big science projects, such as the Large Hadron Collider (LHC, which discovered the Higgs boson), the International Thermonuclear Experimental Reactor (ITER), and in astronomy and astrophysics, the James Webb Space Telescope (JWST, the successor to Hubble), the Large Synoptic Survey Telescope (LSST, pictured below), and the Wide-Field InfraRed Survey Telescope (WFIRST), for example.


Note that some big science projects are primarily supported by government funding while others receive significant funding from industry or philanthropists. LSST and LIGO are supported by the NSF, JWST and WFIRST are supported by NASA, and the LHC is supported by CERN, but all of these are international. The fusion reactor ITER (see diagram below), the subject of a recent detailed New Yorker article, has experienced many delays, has gone over its multi-billion-dollar budget, and has had management problems as well. While budget and scheduling problems are common for big science projects, ITER is in a situation in which it needs to produce results in the near future and avoid additional delays. (The US is contributing about 9% of ITER’s total cost, but its current contribution is lower than last year’s and its future contributions may be reevaluated at later stages of the project.)


As scientists, we try to balance small-, mid-, and large-size projects. The large ones are larger than before, require decades of planning and large budgets, and often consist of collaborations with hundreds of people from many different countries. But relatively small- and mid-scale projects (such as TESS and IBEX in astronomy) matter greatly too, for research, innovation, education, and outreach, and as they usually involve fewer risks, they can provide at least as much “bang for the buck” (in the parlance of our times).

In the context of “big science” projects these days, the concepts of “big data” and “data-driven science” are certainly relevant. Many people argue that we are now in an era of big data, in which we’re obtaining collections of datasets so large and complex that it becomes difficult to process them using on-hand database management tools or traditional data processing applications. Since the volume, velocity, and variety of data are rapidly increasing, it is increasingly important to develop and apply appropriate data mining techniques, machine learning, scalable algorithms, analytics, and other kinds of statistical tools, which often require more computational power than traditional data analyses. (For better or for worse, “big data” is also an important concept in the National Security Agency and related organizations, in government-funded research, and in commercial analyses of consumer behavior.)

In astronomy, this is relevant to LSST and the other projects mentioned above. When LSST begins collecting data, each night for ten years it will obtain roughly as much data as the entire Sloan Digital Sky Survey, which was until recently the biggest survey of its kind, and it will obtain about 800 measurements each for about 20 billion sources. We will need new ways to store and analyze these vast datasets. This also highlights the importance of “astrostatistics” (including my own work) and of “citizen science” (which we introduced in a previous post), such as the Galaxy Zoo project. IT companies are becoming increasingly involved in citizen science as well, and the practice of citizen science itself is evolving with new technologies, datasets, and organizations.
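To get a feel for the scale, here’s a quick back-of-the-envelope estimate using the catalog numbers above; the bytes-per-measurement figure is my own rough assumption, not an LSST specification:

```python
n_sources = 20e9  # sources in the final LSST catalog (from the text)
n_epochs = 800    # measurements per source (from the text)
bytes_per_measurement = 100  # assumed storage per catalog row

rows = n_sources * n_epochs
petabytes = rows * bytes_per_measurement / 1e15
print(f"{rows:.1e} measurements, about {petabytes:.1f} PB of catalog data")
```

Even under these generous assumptions, the catalog alone lands in the petabyte range before counting the raw images, hence the need for scalable databases and algorithms.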

I’ll end by making a point that was argued in a recent article in Science magazine: we should avoid “big data hubris,” the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis.

My Experience with the Congressional Visit Day

[A previous version of this first appeared as a Guest Post on the AAS Policy Blog.]

Last week, I participated in the Congressional Visit Day (CVD) with the American Astronomical Society (AAS). I was just one member of a group of eighteen AAS members—a diverse group from around the country involved in many different subspecialties of astronomical research, as well as various teaching and outreach programs. Below is a nice photo of us (I’m the guy wearing a hat). Our AAS delegation was part of a larger group of scientists, engineers, and business leaders from a few dozen organizations participating in the CVD, which was sponsored by the Science-Engineering-Technology Work Group. Go here for a further description of our program.


As scientists and members of the AAS, we had a few primary goals. We argued first and foremost for the importance of investing in scientific research (as well as education and outreach) through funding to the National Science Foundation (NSF), NASA, and science programs in particular departments (especially the Depts. of Energy and Defense). If you’re interested, you can see our handout here. We also encouraged our Representatives to sign two “Dear Colleague” letters that are currently circulating in the House: the first letter, by Rep. G. K. Butterfield (D-NC), asks for a 3% increase to NSF’s FY 2015 budget, to $7.5 billion, and the second letter, by Rep. Rush Holt (D-NJ), Rep. Randy Hultgren (R-IL), and Rep. Bill Foster (D-IL), asks the appropriators to “make strong and sustained funding for the DOE Office of Science one of your highest priorities in fiscal year 2015.”

We also told our Congress members about our personal experiences. In my case, I have been funded by NASA grants in the past and am currently funded by an NSF grant. I am applying for additional research grants, but it’s not easy when available funding covers only a small fraction of submitted grant proposals. In the past, I have also benefited from projects and telescopes that were made possible by NASA and the NSF, and I plan to become involved in new telescopes and missions such as the Large Synoptic Survey Telescope (LSST), the Wide-Field InfraRed Survey Telescope (WFIRST), and possibly the James Webb Space Telescope (JWST, the successor to the Hubble Space Telescope). Also, if an NSF grant proposal I’ve submitted is successful (fingers crossed!), I will be able to participate more actively in public outreach programs, especially in the San Diego area, in addition to continuing my research.

Not only did we explain the importance of stable funding for basic research, we also talked with our legislators about how astronomy is a “gateway science” that draws people in and inspires them to learn more, become more involved, and even potentially become scientists themselves.

We talked about the importance of improving science and math literacy, which also improves US competitiveness with respect to other countries, and about how investment in science spurs innovation in industry and leads to new and sometimes unexpected developments in computing, robotics, optics, imaging, radar, you name it. Since “all politics is local,” as they say, we also emphasized that these investments in scientific research are important for strong local, as well as national, economies. As we were visiting shortly after the introduction of the President’s Budget Request (PBR) for FY 2015, we also expressed our concern that the proposed budget reduces funding for NASA’s education and outreach activities within the Science Mission Directorate by two-thirds, and would require mothballing the Stratospheric Observatory For Infrared Astronomy (SOFIA) outside of the well-established senior review process.

My Congress members are Senators Barbara Boxer and Dianne Feinstein, whose staff we met, and Representative Susan Davis (CA-53), with whom we met personally (along with a member of her staff). We had a quick photo-op too, right before she had to get back to the House chamber for a vote. I was in a group with two other astronomers, from Oklahoma and Illinois, and we met with their respective Congress members as well. Our larger group was split into teams of three to four for the day’s visits, and each team met with the representatives and senators of all its members.


Senators and Representatives serve on different committees and subcommittees, each with a specific jurisdiction over parts of the federal government. For example, Sen. Boxer is on the Science & Space Subcommittee of Senate’s Commerce Committee and is the chair of the Committee on Environment & Public Works. Sen. Feinstein is chair of the Senate Appropriations Committee’s Subcommittee on Energy & Water, which has jurisdiction over the Department of Energy (among many other things). The appropriations committee is responsible for writing legislation that grants federal agencies the ability to spend money, that is, they appropriate the budgets for the agencies under their jurisdiction. Rep. Davis is a member of the House Education & Workforce Committee and has done a lot of work on educational reform, promoting youth mentoring, and civic education.

I think that we received a largely positive response from our congressional representatives. My three Congress members were very supportive and in agreement with our message. Some of the other members we met with, while generally positive about our message, left me with the impression that they approved of our “hard sciences” but didn’t want as much funding going to social sciences, climate science, and other particular fields. It seems to me that we must get ourselves out of this highly constrained budget environment, in which discretionary programs like those funding the sciences are capped each year; we need to either find additional sources of revenue (e.g., reducing tax breaks) or make other changes to current law.

In my previous blog post, I talked about the proposed budget and the negotiations taking place in Congressional committees. We also need to consider the current political situation with the upcoming mid-term elections. Once a budget (which may be significantly different than the PBR) is passed by the House and Senate Appropriations Committees, it will be considered by the House and Senate, which are currently controlled by Republicans and Democrats, respectively (the Democrats have 53 seats plus 2 independents who caucus with them). However, it appears possible that Republicans may retake the Senate in the 114th Congress, and in that case their leadership may resist even small additions to the current budget request and may attempt to simply pass a “continuing resolution” instead.

On the same day as our CVD (March 26), Office of Science and Technology Policy Director John Holdren appeared before the House Committee on Science, Space, and Technology, where there were considerable disagreements among the committee members about STEM education, SOFIA, and other issues. (Note that the committee is particularly polarized and has been criticized for its excessive partisanship and industry influence.) Fortunately, a hearing the following day before House appropriators on the NSF budget request fared better. This is encouraging, but in any case it will be a difficult struggle to produce a good budget (that is, good for science) on a short timescale.