The Proposed Fiscal Year 2015 Budget: Thoughts on its Implications for Science

I’d like to make a few comments on the proposed US federal budget for Fiscal Year 2015 (FY15, which starts in October), especially on its implications for science research and education in this country. First, I’ll acknowledge articles and blogs by Matt Hourihan (at the American Association for the Advancement of Science) and Josh Shiode (at the American Astronomical Society), which I’ve used for some of the information and figures below. I’m responsible, though, if I’ve misstated any facts or numbers, and as usual, any opinions I express about the current state of affairs are my own. I look forward to discussing these issues with scientists and other interested people, and as usual, you’re welcome to write or send me comments.

President Obama’s administration officially released its President’s Budget Request (PBR, but not the beer!) on 4th March, and the details are available on the White House’s website. The PBR is formulated by the Office of Management and Budget (OMB), and it will soon be evaluated and revised by the Appropriations Committees in Congress. The White House’s Office of Science & Technology Policy (OSTP) plays a role in developing the budget, but naturally there are many other considerations involved as well, such as ensuring national security, strengthening the economy, maintaining healthcare and education programs, etc. Nonetheless, from the perspective of science research and education, the budget certainly could be better.

15p R&D Pie_AAAS

Unfortunately, the Budget Control Act puts caps on discretionary spending, which covers most federal support for research and development (R&D). Assuming little to no additional revenue, there is not much room in the discretionary budget above FY 2014 levels. With three-quarters of the post-sequester spending reductions still in place (see my previous blog post), many agency R&D budgets are stagnant. The $3.901 trillion budget includes $136.5 billion for R&D, a 0.5% increase over FY 2014 that does not keep pace with the 1.7% inflation rate, so it is a cut in real terms. The division by agency is shown in the above pie chart (courtesy: AAAS) and described in this article. Funding for the physical sciences largely comes from the National Science Foundation (NSF), NASA, the Department of Energy (DOE) Office of Science, and other agencies and departments. Total research funding (basic + applied research) drops 1.9% below FY 2014 levels, leaving it only slightly above FY 2013 post-sequester levels.
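To make the inflation point concrete, here’s a quick back-of-the-envelope calculation (just a sketch, using the 0.5% nominal growth and 1.7% inflation figures quoted above):

```python
# Back-of-the-envelope: nominal vs. real (inflation-adjusted) change in FY15 R&D funding.
nominal_growth = 0.005  # proposed 0.5% increase over FY 2014
inflation = 0.017       # assumed 1.7% inflation rate

# Deflate the nominal growth factor by the inflation factor.
real_change = (1 + nominal_growth) / (1 + inflation) - 1
print(f"Real change in R&D funding: {real_change:+.1%}")  # about -1.2%
```

In other words, the proposed “increase” is a decrease of roughly 1.2% in real terms.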

budget_diffs_WP

The President has also proposed an additional $56 billion in funding through the Opportunity, Growth, and Security Initiative (OGSI), which would help the situation for many agencies, but it appears that Congress won’t have the stomach for it. As can be seen in the figure above (courtesy: Washington Post), an additional difficulty comes from differences between the revenue projections of the President and the Congressional Budget Office (CBO); the former assumes revenue increases from reducing some tax breaks for wealthy Americans, to which Congress likely won’t agree. In that case, we may be headed back toward sequestration funding levels in FY 2016.

RandDprojections_AAAS

The Association of American Universities (AAU) and the American Astronomical Society (AAS, of which I’m a member) have expressed some criticism of the proposed budget: while they acknowledge the caps on discretionary spending, they argue that basic research and education could receive higher priority. One surprising proposed cut was to the Stratospheric Observatory for Infrared Astronomy (SOFIA), a telescope mounted on an aircraft. The axing of SOFIA in 2015 is particularly vexing for astronomers because the decision was made outside the established review process. The FY 2014 budget proposed a controversial government-wide reorganization of science, technology, engineering and mathematics (STEM) education programs, and this year’s budget includes a surprising cut, by two-thirds, to the STEM education budget within NASA’s Science Mission Directorate (SMD). Time will tell how education programs adapt to these changes, but cuts like these potentially hurt US competitiveness relative to Europe and East Asia, as well as efforts toward improving science and math literacy.

According to Jack Burns (U. of Colorado, Boulder), “by lowering overall spending on the astronomical sciences, the Administration threatens the health of our technical workforce and the education and training of the next generation of space scientists. This is hard to swallow at a time when other countries are increasing their investments in science and technology.” Similarly, in Science magazine, William Press argues that “it appears that [nations] who spend close to 3% of their GDP on R&D are the ones that compete most successfully. The United States is in that club now. We don’t want to fall out of it.”

I’m most interested in astronomy/astrophysics, because it’s my field, but other fields are affected as well. For example, the budget of the National Institutes of Health (NIH) only received a sub-inflationary increase (like most agencies), and the proposed budget includes a substantial cut to fusion energy research and to the US contribution to the international fusion experiment ITER (the International Thermonuclear Experimental Reactor), though funding for energy efficiency and renewables would increase. The Environmental Protection Agency (EPA) would also receive a cut in this budget.

fedspending_AAAS

Finally, as this bar graph shows, the budget prospects for nondefense discretionary spending will likely worsen in the coming years. “Mandatory spending” is controlled by different mechanisms than discretionary spending, and it includes Medicare, Medicaid, and Social Security, which are large programs, as well as food stamps, unemployment compensation, and other smaller ones. As a fraction of GDP, we can expect mandatory spending to continue increasing. On this point, I’ll first say that in my personal view, I’m wary of those who criticize these programs (or who refer to them pejoratively as “entitlements”), because such criticisms give space to extreme conservatives who would rather gut these programs and let the poor, ill, hungry, and elderly suffer on their own. Nonetheless, it appears that, as they are currently funded, the costs of the Medicare and Medicaid programs are growing at an unsustainable rate (faster than inflation). The Affordable Care Act is helping, but it’s probably insufficient to resolve this situation, especially as more baby boomers draw on retirement and health care benefits. Long-term fiscal problems remain.

We also need to consider the current political situation in Congress. I participated in a Congressional Visit Day with the AAS this week, and I’ll soon write my next blog post about that.

Paradigm Shifts?

In addition to physics and astronomy, I used to study philosophy of science and sociology. In my opinion, many scientists could learn a few things from sociologists and philosophers of science, to help them better understand how scientific processes work, what influences them and potentially biases scientific results, and how science advances through their and others’ work. In addition, I think that people who aren’t professional scientists (whom we often simply call “the public”) could better understand what we are learning and gaining from science and how scientific results are obtained. I’ll just write a few ideas here and we can discuss these issues further later, but my main point is this: science is an excellent tool that sometimes produces important results and helps us learn about the universe, our planet, and ourselves, but it can be a messy and nonlinear process, and scientists are human: they sometimes make mistakes and may be stubborn about abandoning a falsified theory or interpretation. The cleanly and clearly described scientific results in textbooks and newspaper articles are misleading in a way, as they sometimes make us forget the long, arduous, and contentious process through which those results were achieved. To quote Carl Sagan (in Cosmos), who inspired the subtitle of this blog (the “pale blue dot” reference),

[Science] is not perfect. It can be misused. It is only a tool. But it is by far the best tool we have, self-correcting, ongoing, applicable to everything. It has two rules. First: there are no sacred truths; all assumptions must be critically examined; arguments from authority are worthless. Second: whatever is inconsistent with the facts must be discarded or revised.

As you may know, the title of this post refers to Thomas Kuhn (in his book, The Structure of Scientific Revolutions). “Normal science” (the way science is usually done) proceeds gradually and is based on paradigms, which are collections of diverse elements that tell scientists what experiments to perform, which observations to make, how to modify their theories, how to make choices between competing theories and hypotheses, etc. We need a paradigm to demarcate what is science and to distinguish it from pseudo-science. Scientific revolutions are paradigm shifts, which are relatively sudden and unstructured events, and which often occur because of a crisis brought about by the accumulation of anomalies under the prevailing paradigm. Moreover, they usually cannot be decided by rational debate; paradigm acceptance via revolution is essentially a sociological phenomenon and is a matter of persuasion and conversion (according to Kuhn). In any case, it’s true that some scientific debates, especially involving rival paradigms, are less than civil and rational and can look something like this:
calvin_arguing

I’d like to make the point that, at conferences and in grant proposals, scientists (including me) act as if we are developing research that is not only cutting-edge but also groundbreaking and Earth-shattering; some go so far as to claim that they are producing revolutionary (or paradigm-shifting) research. Nonetheless, scientific revolutions are actually extremely rare. Science usually advances at a very gradual pace and with many ups and downs. (There are other reasons to act like our science is revolutionary, however, since this helps to gain media attention and to perform public outreach, and it helps policy-makers justify investments in basic scientific research.) When a scientist or group of scientists does obtain a critically important result, it is usually the case that others have already produced similar results, though perhaps with less precision. Credit often goes to a single person who packaged and advertised their results well. For example, many scientists are behind the “Higgs boson” discovery, and though American scientists received the Nobel Prize for detecting anisotropies in the cosmic microwave background with the COBE satellite, Soviet scientists actually made an earlier detection with the RELIKT-1 experiment.

einstein-bohr

Let’s briefly focus on the example of quantum mechanics (QM), in which there were intense debates in the 1920s about (what appeared to be) “observationally equivalent” interpretations, which in a nutshell were either probabilistic ones or deterministic and realist ones. My favorite professor at Notre Dame, James T. Cushing, wrote a provocative book on the subject with the subtitle, “Historical Contingency and the Copenhagen Hegemony“. The debates occurred between Niels Bohr’s camp (with Heisenberg, Pauli, and others, who were primarily based in Copenhagen and Göttingen) and Albert Einstein’s camp (with Schrödinger and de Broglie). Bohr’s younger followers were trying to make bold claims about QM and to make names for themselves, and one could argue that they misconstrued Einstein’s views. Einstein had essentially lost by the 1930s, and the nail in the coffin was von Neumann’s so-called impossibility proof of “hidden variables” theories, a proof that was shown to be flawed thirty years later. In any case, Cushing argues that in decisions about accepting or dismissing scientific theories, social conditions or historical coincidences can sometimes play a role. Mara Beller also wrote an interesting book about this (Quantum Dialogue: The Making of a Revolution), and she finds that in order to understand the consolidation of the Copenhagen interpretation, we need to account for the dynamics of the Bohr et al. vs. Einstein et al. struggle. (In addition to Cushing and Beller, another book by Arthur Fine, called The Shaky Game, is also a useful reference.) I should also point out that Bohr used the rhetoric of “inevitability”, which implied that there was no plausible alternative to the Copenhagen paradigm. If you can convince people that your view is already being adopted by the establishment, then the battle has already been won.

More recently, we have had other scientific debates about rival paradigms, such as, in astrophysics, the debate between the existence of dark matter (DM) and modified Newtonian dynamics (MOND); DM is more widely accepted, though its nature (whether it is “cold” or “warm” and to what extent it is self-interacting) is still up for debate. Debates in biology, medicine, and economics are often even more contentious, partly because they have policy implications and can conflict with religious views.

Other relevant issues include the “theory-ladenness of observation”, the argument that everything one observes is interpreted through a prior understanding (and assumption) of other theories and concepts, and the “underdetermination of theory by data.” The concept of underdetermination dates back to Pierre Duhem and W. V. Quine, and it refers to the argument that, given a body of evidence, more than one theory may be consistent with it. A corollary is that when a theory is confronted with recalcitrant evidence, the theory is not falsified; instead, it can be reconciled with the evidence by making suitable adjustments to its hypotheses and assumptions. It is nonetheless the case that some theories are clearly better than others. According to Larry Laudan, we should not overemphasize the role of sociological factors over logic and the scientific method.

In any case, all of this has practical implications for scientists as well as for science journalists and people who popularize science. We should be careful to be aware of, examine, and test our implicit assumptions; we should examine and quantify all of our systematic uncertainties; and we should allow for plenty of investigation of alternative explanations and theories. In observations, we should also be careful about selection effects, incompleteness, and biases. Finally, we should remember that scientists are human and sometimes make mistakes. Scientists are trying to explore and gain knowledge about what’s really happening in the universe, but sometimes other interests (funding, employment, reputation, personalities, conflicts of interest, etc.) play important roles. We must watch out for herding effects and confirmation bias, where we converge and end up agreeing on an incorrect answer. (Historical examples include the optical or electromagnetic ether; the crystalline spheres of medieval astronomy; the humoral theory of medicine; ‘catastrophist’ geology; etc.) Paradigm shifts are rare, but when we do make such a shift, let’s be sure that what we’re transitioning to is actually the best paradigm available.

[For more on philosophy of science, this anthology is a useful reference, and in particular, I recommend reading work by Imre Lakatos, Paul Feyerabend, Helen Longino, Nancy Cartwright, Bas van Fraassen, Mary Hesse, and David Bloor, whom I didn’t have the space to write about here. In addition, others (Ian Hacking, Allan Franklin, Andrew Pickering, Peter Galison) have written about these issues in scientific observations and experimentation. For more on the sociology of science, this webpage seems to contain useful references.]

The Physics of Sustainable Energy

I attended a conference this weekend called “The Physics of Sustainable Energy” at the University of California, Berkeley. It was organized by people affiliated with the American Physical Society, the Energy Resources Group, and a couple of other organizations. Most of the speakers and attendees (including me) seemed to be Californians. I had some interesting conversations with people and attended some great talks by experts in their fields, and here I’ll just give you a few highlights.

First though, I want to make two general comments. I did notice that only ~20% of the speakers were women, which is worse than astrophysics conferences, and it’s too bad the organizers weren’t able to make the conference more diverse. (There were a few people of color speaking though.) Secondly, I think it’s excellent that people (and not just in California) are actively involved in working on solutions and innovations, but I think we should be careful about a technophilic or technocratic emphasis. This was a conference for physicists and engineers, though, and energy policy and communication with the media and policy-makers, for example, were mostly beyond its scope. I was struck by the apparently close ties some speakers had with industry (such as Amory Lovins and Jonathan Koomey); to some extent that’s necessary, but I was a little concerned about potential conflicts of interest.

…On to the conference. Ken Caldeira spoke about the global carbon balance. In per capita CO2 emissions from fossil fuel use and cement production, the US is worst (about 50 kg CO2/person/day), followed by Russia, China, and the EU. California emits half as much per capita as the rest of this country, but 2/3 of the difference is due to a fortunate climate (it doesn’t get very cold); according to an audience member (Art Rosenfeld?), “we’re mostly blessed with good luck as well as some brains.” Daniel Kammen (one of the organizers) then talked about developing a framework for energy access for all. According to the Intergovernmental Panel on Climate Change (IPCC AR4 in 2007): “warming will most strongly and quickly impact the global poor.” Kammen described the concept of “energy poverty”: 1.4 billion people lack access to electricity today, and that will still be the case for a similar number in 2030, with more having unreliable or intermittent access. There appears to be a strong correlation between electricity access and the human development index.
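For a sense of scale, the per capita rate Caldeira quoted works out to roughly 18 tonnes of CO2 per person per year (a quick unit-conversion sketch; the 50 kg/day figure is from his talk as I noted it):

```python
# Convert the quoted US per-capita emission rate from kg/day to tonnes/year.
kg_per_person_per_day = 50.0
tonnes_per_person_per_year = kg_per_person_per_day * 365 / 1000
print(f"~{tonnes_per_person_per_year:.0f} t CO2 per person per year")  # ~18
```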

VAWTs

It seems that many people are working on interesting research & development on renewable energy sources. Jennifer Dionne spoke about “upconversion” in solar cells, which involves thermodynamic, electronic, and photonic design considerations. The upconversion process can improve cell efficiency by at least 1.5× (see Atre & Dionne 2011), and it often works well at optical and near-infrared wavelengths. (She pointed out that of the energy from the sun, 5% is in the UV, 43% in the optical, and 52% in the infrared. And if you’re interested in what those proportions are like for different types of galaxies, check out my recent paper.) Then Chris Somerville spoke about the status and prospects of biofuels, the production of which is currently dominated by the US and Brazil. The combustion of biomass faces challenges in providing low-carbon energy: its carbon balance depends on the tilling of soil, land conversion, fertilizer, transportation, and processing. I’m concerned about deforestation and effects on ecosystems, as well as the effects on food/crop prices (remember the food riots in 2007-2008 and the rising cost of corn/maize?). In my opinion, Somerville didn’t sufficiently address this, though he did argue in favor of miscanthus and other biomass rather than the use of corn. John Dabiri spoke about the advantages of vertical-axis wind turbines (called VAWTs, see the figure above), as a complement to the ubiquitous horizontal-axis variety. VAWTs have a smaller structure size and cost, simpler installation logistics, and are safer for birds and bats as well. Currently only four countries get >10% of their electricity from wind (Spain, Portugal, Ireland, and Denmark, followed by Germany with 9%), but this could readily be improved.

LLNL_Flow-Chart_2012

This flow diagram is pretty nice, and it describes current energy use in the US (presented by Valerie Thomas). And Daniel Kammen, in a paper on the relation between energy use, population density, and suburbanization, shows the spatial distribution of carbon footprints (where the units are tCO2e, tonnes of carbon dioxide equivalent per household).

JonesKammen2014_Fig1

Tilman Santarius gave a nice talk about energy efficiency rebound effects, which is closely related to my previous post, where you can find more information. He discussed the interactions between energy efficiency, labor productivity, human behavior, and economic growth, and he distinguished between rebounds due to an income effect and those due to a substitution effect. In any case, average direct rebound effects appear to be around 20-30% (Greening et al. 2000; Sorrell 2007), in addition to an indirect rebound of 5-10%. In other words, around 1/3 of the expected energy savings from efficiency gains is lost because of an increase in energy demand. He also talked about the psychology of rebounds, including moral licensing (such as Prius drivers who drive more) and moral leakage (people feel less responsible). It will be a difficult task to decouple energy demand from economic growth.
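As a rough illustration of how those rebound percentages combine (a sketch using the midpoints of the ranges cited above; the exact values vary by study and sector):

```python
# Rough illustration: how rebound effects erode the energy savings expected
# from an efficiency improvement (normalized to 1 unit of expected savings).
expected_savings = 1.0
direct_rebound = 0.25     # midpoint of the ~20-30% direct rebound range
indirect_rebound = 0.075  # midpoint of the ~5-10% indirect rebound range

total_rebound = direct_rebound + indirect_rebound
net_savings = expected_savings * (1 - total_rebound)
print(f"Savings lost to rebound: {total_rebound:.0%}")  # ~33%, i.e., about 1/3
print(f"Net realized savings: {net_savings:.2f}")       # ~0.68 of the expected 1.0
```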

There were many other interesting talks, but I’ll end with the issue of climate adaptation and geoengineering. Ann Kinzig described how the combined risk of a phenomenon is the sum over events of p(event) × impact(event). Mitigation seeks to reduce the probability p, while adaptation seeks to reduce the impact. Climate change will have impacts on food, water, ecosystems, and weather events, and decision-makers in urban areas can try to prepare for these (see this website). Kinzig also spoke about historical case studies of failed adaptations by people in the Hohokam (Arizona), Mesa Verde (Colorado), and Mimbres (New Mexico) regions, and the dependence on societal hierarchy and conformity. Alan Robock spoke about the risks and benefits of “geoengineering”, which involves gigantic future projects to address climate change, such as space-based reflectors, stratospheric aerosols, and cloud brightening (seeding clouds), and which basically amounts to using the Earth as a science experiment with a huge cost of failure. In particular, he studies the many problems of injecting sulfate aerosols into the stratosphere to cool the planet. (Some people have supported this idea because of the supposedly benign effects of volcanic eruptions in the past.) He discussed the potential benefits of stratospheric geoengineering but compiled a list of 17 risks, including drought in Africa and Asia, continued ocean acidification, ozone depletion, no more blue skies, military use of the technology, ruining terrestrial optical astronomy, moral issues, and unexpected consequences. For more on Robock’s research and for other useful references, go here.
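Here is a minimal sketch of Kinzig’s risk framing; the event names and numbers are hypothetical placeholders, just to show where mitigation and adaptation enter the sum:

```python
# Combined risk R = sum over events of p(event) * impact(event).
# Mitigation scales down the probabilities p; adaptation scales down the impacts.
# All events and numbers below are hypothetical.
events = {
    "severe drought": {"p": 0.10, "impact": 100.0},
    "major flood":    {"p": 0.05, "impact": 200.0},
}

def combined_risk(events, mitigation=1.0, adaptation=1.0):
    """Total expected risk, with factors scaling p (mitigation) and impact (adaptation)."""
    return sum(e["p"] * mitigation * e["impact"] * adaptation for e in events.values())

print(combined_risk(events))                  # baseline: 20.0
print(combined_risk(events, mitigation=0.5))  # halved probabilities: 10.0
print(combined_risk(events, adaptation=0.5))  # halved impacts: 10.0
```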

Robock_figure

An introduction to “space security”

I’m curious about what people refer to as “space security”, as well as space policy and sustainability, and if you’re interested, you can learn with me. This post will just be an introduction to some of the issues involved. Note that I’m not an expert on many of these issues, so take my comments and thoughts with a grain of salt.


The idea of “space security” might conjure images of invading aliens, but as much fun as that is, that’s not what I’m talking about. I’m also not planning on talking about killer asteroids and dangerous radiation, though these are much less far-fetched. For example, the Pan-STARRS survey (of which I was briefly a member a few years ago) received funding from NASA to assess the threat to the planet from Near Earth Objects, some of which pass closer to us than the moon. (A limitation of Pan-STARRS, however, was that images that happened to contain passing satellites had software applied to black out or blur the pixels in the region.) Meanwhile, solar flares and “coronal mass ejections” can produce intense bursts of energetic particles that could be hazardous to spacecraft, but on Earth we’re somewhat protected by our atmosphere and magnetosphere. This and other forms of “space weather” could be the subject of another post later.

I’d like to talk about the issue of satellites, as well as weapons and reactors, in space. More than 5,000 satellites have been launched into orbit, and about 1,000 are in operation today. The act of destroying a satellite or of colliding satellites can damage the space environment by creating dangerous amounts of debris. (If you’ve seen the Oscar-winning Gravity, then you know that debris from satellites can be a serious problem.) For example, in a demonstration of an anti-satellite weapon in 2007, China destroyed one of its own satellites; the resulting “space junk” then struck and destroyed a small Russian satellite last year. The following computer-generated images of the growing number of objects in low-Earth orbit (courtesy of the NASA Orbital Debris Program Office) illustrate the problem. Only 5% of the objects are satellites; the rest are debris. Currently more than 21,000 pieces of debris larger than 10 cm are being tracked, and there are as many as 500,000 additional untracked pieces larger than 1 cm.

Satellites and orbital debris_500x350

In addition, the loss of an important satellite could create or escalate a conflict, especially during a time of tension between states. The US and other countries possess “anti-satellite” weapons (ASATs), and some have pursued or are considering space-based missile defense systems. Attacks on satellites are a very real possibility, and it is important to beware of the destabilizing effects of, and potential for proliferation with, such weapons. Moreover, since the Cold War, the US and other governments have considered deploying nuclear reactors on spacecraft, which has proven controversial (such as the dubiously named Project Prometheus, which was cancelled in 2006); an intentionally or unintentionally damaged nuclear reactor in space could have major consequences.

Considering that we are increasingly dependent on satellites and that there are military, commercial, and civil interests in space, how can we attempt to ensure space security and sustainability in the future? In the US, the Obama administration has a National Space Policy, which was released in June 2010. The policy mainly consists of: (1) limiting further pollution of the space environment; (2) keeping objects from colliding with each other and/or exploding; and (3) actively removing high-risk space debris. The policy is a good start, but much more could be done. An emphasis on international cooperation rather than unilateral action would help; space debris is clearly a global problem requiring global solutions. It is also important to negotiate on the control of space weapons. The US and other space powers should declare that they will not intentionally damage or disable satellites operating in accordance with the Outer Space Treaty and that they will not be the first to station weapons in space. Moreover, “space situational awareness” (SSA), which allows for the coordination of space traffic, can be improved in collaboration with other countries, and satellites can be made less vulnerable to collision or attack. Finally, the US should play an active role in negotiations with the international community on space security and sustainability. The United Nations Committee on the Peaceful Uses of Outer Space (COPUOS), with 76 member states, has been working on a variety of programs to improve the long-term sustainability of space activities and, in particular, to develop and adopt international standards to minimize space debris.

The Future of Fracking in California

I attended an interesting forum at UC San Diego on Thursday, and this post is based on that. It was titled, “The Future of Fracking in California: Energy, Environment and Economics,” and the speakers included: Taiga Takahashi, Associate in the San Diego office of Latham & Watkins; Mark Ellis, Chief of Corporate Strategy for San Diego-based Sempra Energy; and Andrew Rosenberg, Director of the Center for Science and Democracy at the Union of Concerned Scientists. I’ll just summarize some of the more important points people made (based on my incomplete notes), and you can decide what you think of them.

UCS-fracking-report-Fig1

Taiga Takahashi described the legal situation in California vis-à-vis hydraulic fracturing (fracking). Governor Jerry Brown supports “science-based fracking” that is protective of the environment. Brown also touts the economic benefits, including the creation of 2.8 million jobs (though this figure was disputed). In contrast, the CA Democratic party supports a moratorium on fracking. The bill SB 4 on well stimulation, passed in September, requires the state Department of Oil, Gas & Geothermal Resources (DOGGR) to adopt regulations regarding water well testing and other tests of air and water pollution. New regulations will be developed by January 2015, while an environmental impact study will be completed six months afterward (my emphasis). Fracking restrictions are mostly similar to those in Colorado and much better than those in Pennsylvania. Takahashi argued that a “consensus approach” on fracking regulation in CA could be reached, which would include nongovernmental organizations (NGOs), the state, and industry.

Mark Ellis is a representative of industry. Sempra Energy is a major natural gas utility that owns San Diego Gas & Electric and Southern California Gas. Ellis argued that the “shale revolution” (his term) has made gas cheap relative to oil and thereby reduced prices. Gas is used mostly for electric power, since many utilities are switching from coal to gas, as well as in industry and residential areas. There are also opportunities for using gas in transportation, such as with compressed or liquefied natural gas (LNG). Sempra is expanding production and building pipelines from Texas and Arizona to Mexico. Ellis argued that the “shale revolution” is being or could be replicated in other places, such as the UK, Australia, Brazil, and Russia.

Andrew Rosenberg spoke about a couple of recent Union of Concerned Scientists (UCS) reports: “The Curious Case of Fracking: Engaging to Empower Citizens with Information” and “Toward an Evidence-Based Fracking Debate,” written by Pallavi Phartiyal, him, and others. He brought up many issues, such as the use of pipeline infrastructure vs. trains and the relation between fracking, chemical plants, and oil. Importantly, fracking is a many-step process (as you can see in the figure at the top of this post), which includes water acquisition, chemical transport and mixing, well drilling and injection, a wastewater pit, onsite fuel processing and pipelines, nearby community residences and residential water wells, and waste transport and wastewater injection. The most important point he made is that we as a society must decide when particular actions are worth the risks, and to what extent those risks can be mitigated with regulations. There should be as much transparency as possible and plenty of opportunities for public comment. It’s important to close loopholes in federal environmental legislation; disclose the chemical composition, volume, and concentration of fracking fluids and wastewater; require baseline testing and monitoring for air, water, and soil quality; make data publicly accessible; and engage citizens and address their concerns. (My views were mostly in agreement with Rosenberg’s. Full disclosure: I am an active member of UCS.)

After the speakers, there were a few comments and questions. I was surprised that this was the only time during the forum that climate change issues were raised. The issue of water usage was discussed as well, because of our ongoing drought. (In related news, Gov. Brown and the state Legislature just passed a drought relief package.) It was also clear that Sempra and other companies wouldn’t voluntarily make changes unless industry-wide regulations were applied; Ellis argued that singling out particular companies is counter-productive. It’s possible that there will be new Environmental Protection Agency (EPA) regulations on water and air pollution in the future.

The fracking debates in California continue. For example, the Los Angeles City Council is taking steps toward a fracking ban, and a rally against fracking is being organized at the Capitol in Sacramento in two weeks.