Comet Update! Rosetta’s Philae landed, but not as planned

Now here’s what you’ve been waiting for! You really need more comet, like Christopher Walken/Bruce Dickinson needs more cowbell, so here you go…

In a blog post a few months ago, I told you about the European Space Agency’s (ESA’s) Rosetta mission. Nine years after its launch, and after four gravity assists, Rosetta reached the comet 67P/Churyumov-Gerasimenko and began to orbit it. On 11 November, Rosetta maneuvered its position and trajectory to eject its washing machine-sized lander, Philae, which sallied forth and landed on the comet the next day, and MADE HISTORY! (Wired‘s apt headline, “Holy Shit We Landed a Spacecraft on a Comet,” beat The Onion, which is known for that sort of thing.) The landing was confirmed at ESA’s Space Operations Centre in Darmstadt, Germany at 17:03 CET that day.

[Image: Philae departing from Rosetta (ESA)]

Above you can see Philae on its fateful journey, and below you can see its first image of the comet, both courtesy of ESA. The landing happened to take place while friends of mine were at a Division of Planetary Sciences meeting in Tucson, Arizona, and we and others discussed the Philae landing at Friday’s Weekly Space Hangout with Universe Today. And if you’re interested in more information than what I’ve written here, then check out the ESA Rosetta blog and posts by Emily Lakdawalla, Matthew Francis, and Phil Plait.

[Image: Philae’s first image of the comet (ESA)]

From what we can tell, Philae did initially touch down in its predicted landing ellipse (its planned landing zone), but its harpoons—which were supposed to latch onto the surface—failed to fire, and it bounced! Considering how small the comet is and how weak its gravitational pull (about 100,000 times weaker than on the Earth), this could have been the end: the lander could have floated away, never to be seen again. However, after nearly two hours, it landed again…and bounced again, and a few minutes later it finally settled on the surface and dug in its ice screws, about 1 km from its intended landing spot on a comet 4 km in diameter. (This would be like trying to land a plane in Honolulu and ending up on another island—it’s unfortunate, but at least you didn’t drown.)
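
To get a feel for just how weak that gravity is, here’s a quick back-of-the-envelope sketch in Python. The mass (~10^13 kg) and effective radius (~2 km) are rough published estimates that I’m assuming here, not numbers from the mission team, and the comet’s lumpy shape means the real values vary from place to place:

```python
import math

G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
M = 1.0e13      # assumed mass of 67P [kg] (rough published estimate)
R = 2.0e3       # assumed effective radius [m] (the comet is ~4 km across)

g_comet = G * M / R**2                 # surface gravity [m/s^2]
v_escape = math.sqrt(2 * G * M / R)    # escape velocity [m/s]

# The ratio comes out at the same order of magnitude as the
# ~100,000x figure quoted above; the exact number depends on
# where you stand on this very irregular body.
print(f"surface gravity ~ {g_comet:.1e} m/s^2 "
      f"(~{9.81 / g_comet:,.0f}x weaker than Earth's)")
print(f"escape velocity ~ {v_escape:.2f} m/s")
```

With an escape velocity of less than 1 m/s, even a gentle rebound could have sent Philae off the comet for good, which is why the failed harpoons were such a worry.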

[Image: OSIRIS mosaic of Philae’s first touchdown (ESA/Rosetta)]

At first, it wasn’t clear exactly where Philae actually was; it could have dropped into a crater where it would be nearly impossible to find. But based on images from the OSIRIS camera and the NavCam (navigation camera) on Rosetta, ESA scientists were finally able to locate it a couple of days ago. The mosaicked images above come from the OSIRIS team, and the NavCam image below, annotated by Emily Lakdawalla, gives the larger-scale context. After its last bounce, Philae rotated and headed “east,” finally settling among dust-covered ice at the bottom of a shadowed cliff. It’s not an ideal position, but at least it’s not totally precarious. (They considered securing the lander with the harpoons, but the recoil from firing them could push it back up into space, which would be “highly embarrassing,” according to Stephan Ulamec, head of the lander team.)

[Image: NavCam view of the comet, annotated post-landing (ESA/Emily Lakdawalla)]

But the cliff situation is a problem. Philae’s battery had a little more than two days of juice in it, and once that ran out, it would be dependent on its solar panels. However, Philae’s current position only receives about 1.5 hours of light per 12-hour rotation of the comet, much less than hoped. Philae did run some of its experiments and activities in the time allotted, but the battery ran out late on Friday. This was @Philae2014’s last tweet: “My #lifeonacomet has just begun @ESA_Rosetta. I’ll tell you more about my new home, comet #67P soon… zzzzz #CometLanding”

Before Philae dreamt of electric sheep, it managed to collect some data using instruments on board. (See this Nature news article.) For example, Philae deployed its drilling system (SD2) as planned, in order to deliver samples to the COSAC and Ptolemy instruments, which probe organic molecules and water (and which I described in my previous Rosetta post). But ESA scientists don’t know how much material SD2 actually delivered; if the ground is very dense, then since Philae isn’t fully anchored, the drill could have pushed the lander away rather than digging into the surface. We do know for sure that some instruments operated successfully, such as the downward-looking ROLIS camera and ROMAP, the magnetic field mapping system.

In any case, scientists have obtained some data already, while other data stuck on the snoozing lander may be retrieved later. In the meantime, Rosetta is keeping busy and continues to make observations. Philae has already been a success, and who knows, maybe it will “wake up” when its solar panels absorb enough sunlight to recharge the batteries.

[Note that the NavCam images we’ve seen so far are pretty good, but I have heard that Rosetta scientists have much better resolution color images that are embargoed and won’t be released for six months. I haven’t confirmed this fact yet, so if you have more up-to-date info, please let me know.]

Finally, I’ll end with some comments about what some people are referring to as #shirtstorm or #shirtgate. (For more info, see this Guardian article and this blog post and this one.) On the day of the worldwide live-stream broadcast last week, Matt Taylor, the Rosetta Project Scientist, wore a shirt covered with images of scantily clad women. I get the impression that Taylor is a cool guy and wants to get away from the stereotypes people have of scientists, but this was completely inappropriate. (And he’s worn this shirt to work before. Apparently none of his colleagues told him to leave it at home.) But it’s not just the shirt; during the middle of the broadcast, Taylor referred to Rosetta as the “sexiest” mission: “She’s sexy, but I never said she was easy.”

[Image: Matt Taylor during the broadcast]

We debated many aspects of this on astronomers’ official and unofficial social media, and for the most part, our community is very unhappy about this. You may say that we should focus on the science, and who cares about what this scientist wears or says when he’s excited about his mission’s success. But we have been working really hard to increase diversity in STEM fields and to achieve gender equality in science. Many aspects of working in the current scientific establishment are not particularly welcoming to women, and Matt Taylor’s shirt and poor choice of words are part of the problem. A few days later, Taylor made a heartfelt apology. As far as I know, ESA itself has not issued an official apology yet. The American Astronomical Society made a statement today (Wednesday) that “We wish to express our support for members of the community who rightly brought this issue to the fore, and we condemn the unreasonable attacks they experienced as a result, which caused deep distress in our community. We do appreciate the scientist’s sincere and unqualified apology.”

In any case, our focus is on the science and on this amazing scientific achievement. But Science is for everyone.

Three Astrophysicists (including me) Meet with Congresswoman Davis

Last Tuesday, three weeks before the midterm election, three astrophysicists—graduate students and Ph.D. candidates Darcy Barron and Evan Grohs and I (a research scientist)—met with Representative Susan Davis (CA-53) and her staffer, Gavin Deeb. We had a twenty-minute meeting to talk about science in her district office in North Park, San Diego, which is on Adams Avenue, within biking distance of my home. Darcy and I are her constituents, while Evan is a constituent of Rep. Scott Peters (CA-52), who is also a science advocate but is in a tight election race.

[Photo: our meeting with Rep. Susan Davis]

I enjoyed participating in the Congressional Visit Day in Washington, DC, earlier this year (and Darcy had previously participated in the program too). In March, Josh Shiode (AAS Public Policy Fellow) and I had a short meeting with Rep. Davis and one of her DC staffers. This time, in her San Diego district, we had more time to chat. As before, she was very receptive to our message about the importance of federal investment in basic research, education, and public outreach in the astronomical sciences and in science in general.

The current science budget situation and constraints from the ongoing “sequestration” leave Congress and the Executive branch with little wiggle room, but we need to make the best of a bad situation. Otherwise, the US risks dropping behind Europe, Japan, and China in astrophysics research and in educating the next generation of scientists. Most federal funding for astronomy and astrophysics comes from the National Science Foundation (NSF), NASA, and the Department of Energy (DOE) Office of Science. Unfortunately, rather than improving and increasing these agencies’ constrained budgets, Congress became mired in gridlock with little time before the election, and to avoid another government shutdown, members had to vote on a “continuing resolution,” which basically keeps the budget on autopilot. Unless budget negotiations become an immediate priority after the election, it seems we’ll have to wait until FY 2016 to try to improve science budgets.

Rep. Davis stressed the importance of science communication, outreach, and improving diversity of the scientific workforce, and we were all in agreement about that. Communicating science to the public well helps to remind people how awesome science is and how important our investment in it is. And in our outreach efforts, the young and diverse students we reach and hope to inspire will be the people who advance science in the future. Rep. Davis was clearly interested in these issues and supportive of our and our colleagues’ work on them.

A couple months ago, Senator J. Rockefeller (D-WV), chair of the Committee on Commerce, Science, and Transportation, introduced the America COMPETES Reauthorization Act of 2014. According to the Association of American Universities, the bill calls for “robust but sustainable funding increases for the [NSF] and National Institute of Standards and Technology” (NIST), and it “recognizes the past success and continuing importance of the NSF’s merit review process.” It also supports each agency’s efforts to improve education of future science, technology, engineering, and math (STEM) professionals. But as Jeffrey Mervis of Science points out, support for COMPETES wasn’t sufficiently bipartisan, and the act hasn’t been reauthorized.

On the other hand, perhaps there’s a better chance of Congress reauthorizing the Higher Education Act. The HEA is the major law that governs federal student aid, and it’s been reauthorized nine times since Pres. Johnson signed it into law in 1965. Considering that at least 70% of US university graduates are burdened with debt, this is clearly important. The HEA bill, introduced by Sen. Harkin (chair of the Health, Education, Labor, and Pensions Committee), would provide some relief for students by increasing state contributions to public universities (thereby reducing tuition fees), supporting community colleges, and expanding programs that allow high school students to earn college credits. Disagreements between Democrats and Republicans remain on this bill, and we’ll have to wait and see in what form it will be passed.

We didn’t get into all these details, but I just wanted to give you some context. We also briefly discussed the need for graduate education reform and for preparing graduate students for the difficult job markets they face. These issues aren’t addressed in the HEA, though that bill would benefit some grad students who would have decreased loan burdens.

In any case, we’ve got to continue our work and our scientific advocacy, and after the November election, we hope that Rep. Davis, Rep. Peters (or DeMaio), and other Congressional lawmakers can get back together and negotiate a better budget for basic research, education, and public outreach in the physical and social sciences.

Rise of the Giant Telescopes

The biggest telescope ever constructed, the Thirty Meter Telescope (TMT), officially broke ground on Mauna Kea in Hawai’i on Tuesday. Building on technology used for the Keck telescopes, the TMT will have a segmented primary mirror combining 492 hexagonal reflectors fitted together in a honeycomb pattern, giving it an effective diameter of 30 meters, as you’ve probably guessed. (Astrophysicists come up with very descriptive names for their telescopes and simulations.) Thirty meters is really, really big—about a third the length of an American football field and nearly the size of a baseball diamond’s infield. When it’s built, it will look something like this:

[Image: artist’s rendering of the completed TMT]

(If you’re interested, here’s a shameless plug: we discussed the TMT’s groundbreaking on the Weekly Space Hangout with Universe Today yesterday, and you can see the video on YouTube.)

The groundbreaking and blessing ceremony, which included George Takei hosting a live webcast, didn’t go quite as planned. It was disrupted by a peaceful protest of several dozen people who oppose the telescope’s construction. The protesters chanted and debated with attendees, held signs reading “Aloha ‘Aina” (which means ‘love of the land’), and used TMT to spell out “Too Many Telescopes.” There is a long history of tension over Mauna Kea, which native Hawaiians say is sacred ground in need of protection and which is also one of the best places on Earth to put telescopes. This longstanding tension was reported back in 2001 in this LA Times article. According to Garth Illingworth, co-chair of the Science Advisory Committee, “It was an uncomfortable situation for those directly involved, but the way in which the interactions with the protesters was handled, with considerable effort to show respect and to deal with the situation with dignity, reflected credit on all concerned.” In any case, construction will continue as planned.

[Photo: protest at the Mauna Kea groundbreaking]

The TMT’s science case includes observing distant galaxies and the large-scale structure of the early universe, and it will enable new research on supermassive black holes and on star and planet formation. The TMT is led by researchers at Caltech and the University of California (where I work), with partners in Canada, Japan, China, and India. Its optical to near-infrared images will be deeper and sharper than anything else available, with spatial resolution twelve times that of the Hubble Space Telescope and eight times the light-gathering area of any other optical telescope. If it’s completed on schedule, it will have “first light” in 2022 and could be the first of the next generation of huge ground-based telescopes. The others are the European Extremely Large Telescope (E-ELT, led by the European Southern Observatory) and the Giant Magellan Telescope (GMT, led by the Carnegie Observatories and other institutions), both of which will be located in northern Chile.
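
Those comparison numbers are easy to sanity-check, since a telescope’s diffraction-limited resolution scales with mirror diameter and its light-gathering power with the diameter squared. A minimal sketch, where I’m assuming Hubble’s 2.4 m mirror and the 10.4 m Gran Telescopio Canarias (currently the largest single optical telescope) as the points of comparison:

```python
D_TMT, D_HUBBLE, D_GTC = 30.0, 2.4, 10.4   # primary mirror diameters [m]

# Diffraction-limited angular resolution improves as 1/D,
# and collecting area grows as D**2.
print(f"resolution vs. Hubble: {D_TMT / D_HUBBLE:.1f}x sharper")        # ~12.5x
print(f"collecting area vs. GTC: {(D_TMT / D_GTC) ** 2:.1f}x larger")   # ~8.3x
```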

Every ten years, astronomers and astrophysicists prioritize small-, medium-, and large-scale ground-based and space-based missions, with the aim of advising the federal government’s investment, such as funding through the National Science Foundation (NSF) and NASA. The most recent decadal survey, conducted by the National Academy of Sciences, is available online (“New Worlds, New Horizons in Astronomy and Astrophysics“). For the large-scale ground-based telescopes, the NSF will be providing funding for the Large Synoptic Survey Telescope (which I’ve written about here before) and the TMT. There had been debates about funding either the TMT or the GMT, but not both, though a couple years ago GMT scientists opted out of federal funding (see this Science article). NASA is focusing on space-based missions such as the upcoming James Webb Space Telescope (JWST) and Wide-Field InfraRed Survey Telescope (WFIRST), which will be launched later this decade.

Is “Data-driven Science” an Oxymoron?

In recent years, we’ve been repeatedly told that we’re living and working in an era of Big Data (and Big Science). We’ve heard how Nate Silver and others are revolutionizing how we analyze and interpret data. In many areas of science and in many aspects of life, for that matter, we’re obtaining collections of datasets so large and complex that it becomes necessary to change our traditional analysis methods. Since the volume, velocity, and variety of data are rapidly increasing, it is increasingly important to develop and apply appropriate techniques and statistical tools.

However, is it true that Big Data changes everything? Much can be gained from proper data analysis and from “data-driven science.” For example, the popular story about Billy Beane and Moneyball shows how Big Data and statistics transformed how baseball teams are assessed. But I’d like to point out some misconceptions and dangers of the concept of data-driven science.

Governments, corporations, and employers are already collecting (too?) much of our precious, precious data and expending massive effort to study it. We might worry about this because of privacy concerns, but we should also worry about what might happen with analyses that are excessively focused on the data. There are questions that we should be asking more often: Who’s collecting the data? Which data, and why? Which analysis tools, and why? What are their assumptions and priors? My main point will be that the results from computer codes churning through massive datasets are not objective or impartial, and the data don’t inevitably drive us to a particular conclusion. This is why the concept of “data-driven” anything is misleading.

Let’s take a look at a few examples of data-driven analysis that have been in the news lately…

Nate Silver and FiveThirtyEight

Many media organizations are about something, and they use a variety of methods to study it. In a sense, FiveThirtyEight isn’t really about something. (If I wanted to read about nothing, I’d check out ClickHole and be more entertained.) Instead, FiveThirtyEight is about their method, which they call “data journalism” and by which they mean “statistical analysis, but also data visualization, computer programming and data-literate reporting.”

I’m exaggerating, though. They cover broad topics related to politics, economics, science, life, and sports. They’ve had considerable success in making probabilistic predictions about baseball, March Madness, and World Cup teams and in packaging statistics in a slick and easy-to-understand way. They also successfully predicted the 2012 US elections on a state-by-state basis, though they stuck to the usual script of treating it as a horse race: one team against another. Their statistical methods are sometimes “black boxes,” but if you look, they’ll often provide additional information about them. Their statistics are usually sound, but maybe they should be more forthcoming about the assumptions and uncertainties involved.

Their “life” section basically allows them to cover whatever they think is the popular meme of the day, which in my opinion isn’t what a non-tabloid media organization should be focused on. This section includes their “burrito competition,” which could have been a fun idea, but their bracket apparently neglected sparsely populated states like New Mexico and Arizona, where the burrito historically originated.

The “economics” section has faced substantial criticism. For example, Ben Casselman’s article, “Typical minimum-wage earners aren’t poor, but they’re not quite middle class,” was criticized in Al-Jazeera America for being based on a single set of data plotting minimum-wage workers by household income. He doesn’t consider the controversial issue of how to measure poverty or the decrease in the real value of the minimum wage, and he ends up undermining the case for raising the minimum wage. Another article about corporate cash hoarding was criticized by Paul Krugman and others for jumping to conclusions based on revised data. As Malcolm Harris (an editor at The New Inquiry) writes, “Data extrapolation is a very impressive trick when performed with skill and grace…but it doesn’t come equipped with the humility we should demand from our writers.”

Their “science” section leaves a lot to be desired. For example, they have a piece assessing health news reports in which the author (Jeff Leek) uses Bayesian priors based on an “initial gut feeling” before assigning numbers to a checklist. As pointed out in this Columbia Journalism Review article, “plenty of people have already produced such checklists—only more thoughtfully and with greater detail…Not to mention that interpreting the value of an individual scientific study is difficult—a subject worthy of much more description and analysis than FiveThirtyEight provides.” And then there was the brouhaha about Roger Pielke, whose writings about the effects of climate change I criticized before, and who’s now left the organization.
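
To see why a “gut feeling” prior matters so much, here’s a minimal sketch of Bayes’ theorem with made-up numbers (the likelihoods below are mine for illustration, not anything from Leek’s checklist): the same evidence can yield wildly different posteriors depending on the prior you walked in with.

```python
def posterior(prior, p_pass_if_true=0.8, p_pass_if_false=0.3):
    """P(claim is true | report passes the checklist), via Bayes' theorem.

    The two likelihoods are illustrative assumptions, not measured values.
    """
    evidence = p_pass_if_true * prior + p_pass_if_false * (1 - prior)
    return p_pass_if_true * prior / evidence

# Skeptical, agnostic, and credulous "gut feelings":
for prior in (0.1, 0.5, 0.9):
    print(f"prior {prior:.1f} -> posterior {posterior(prior):.2f}")
# prior 0.1 -> posterior 0.23
# prior 0.5 -> posterior 0.73
# prior 0.9 -> posterior 0.96
```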

Maybe Nate Silver should leave these topics to the experts and stick to covering sports? He does that really well.

Thomas Piketty on Inequality

Let’s briefly consider two more examples. You’ve probably heard about the popular and best-selling analysis of data-driven economics in Thomas Piketty’s magnum opus, Capital in the Twenty-first Century. It’s a long but well-written book in which Piketty makes convincing arguments about how income and wealth inequality are worsening in the United States, France, and other developed countries. (See these reviews in the NY Review of Books and Slate.) It’s influential because of its excellent and systematic use of statistics and data analysis, because of the neglect of wealth inequality by other mainstream economists, and of course because of the economic recession and the dominance of the top 1 percent.

Piketty has been criticized by conservatives, and he has successfully responded to these critics. His proposal for a progressive tax on wealth has also been criticized by some. Perhaps the book’s popularity and the clearly widespread and underestimated economic inequality will result in more discussion and consideration of this and other proposals.

I want to make a different point though. As impressive as Piketty’s book is, we should be careful about how we interpret it and his ideas for reducing inequality. For example, as argued by Russell Jacoby, unlike Marx in Das Kapital, Piketty takes the current system of capitalism for granted. Equality “as an idea and demand also contains an element of resignation; it accepts society, but wants to balance out the goods or privileges…Equalizing pollution pollutes equally, but does not end pollution.” While Piketty’s ideas for reducing economic extremes could be very helpful, they don’t “address a redundant labor force, alienating work, or a society driven by money and profit.” You may or may not agree with Piketty’s starting point—and you do have to start somewhere—but it’s important to keep it in mind when interpreting the results.

As before, just because something is “data-driven” doesn’t mean that the data, analysis, or conclusions can’t be questioned. We always need to be grounded in data, but we need to be careful about how we interpret analyses of them.

HealthMap on Ebola

Harvard’s HealthMap gained attention for using algorithms to detect the beginning of the Ebola outbreak in Africa before the World Health Organization did. Is that a big success for “big data”? Not so, according to Foreign Policy. “It’s an inspirational story that is a common refrain in the ‘big data’ world—sophisticated computer algorithms sift through millions of data points and divine hidden patterns indicating a previously unrecognized outbreak that was then used to alert unsuspecting health authorities and government officials…The problem is that this story isn’t quite true.” By the time HealthMap monitored its very first report, the Guinean government had already announced the outbreak and notified the WHO. Part of the problem is that the announcement was published in French, while most monitoring systems today emphasize English-language material.

This seems to be another case of people jumping to conclusions to fit a popular narrative.

What does all this mean for Science?

Are “big data” and “data-driven” science more than just buzzwords? Maybe. But as these examples show, we have to be careful when utilizing them and interpreting their results. When some people conduct various kinds of statistical analyses and data mining, they act as if the data speak for themselves. So their conclusions must be indisputable! But the data never speak for themselves. We scientists and analysts are not simply going around performing induction, collecting every relevant datum around us, and cranking the data through machines.

Every analysis has some assumptions. We all make assumptions about which data to collect, which way to analyze them, which models to use, how to reduce our biases, and how to assess our uncertainties. All machine learning methods, including “unsupervised” learning (in which one tries to find hidden patterns in data), require assumptions. The data definitely do not “drive” one to a particular conclusion. When we interpret someone’s analysis, we may or may not agree with their assumptions, but we should know what they are. And any analyst who does not clearly disclose their assumptions and uncertainties is doing everyone a disservice. Scientists are human and make mistakes, but these are obvious mistakes to avoid. Although objective data-driven science might not be possible, as long as we’re clear about how we choose our data and models and how we analyze them, then it’s still possible to make progress and reach a consensus on some issues and ask new questions on others.
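
As a concrete illustration of that point about unsupervised learning, here’s a small sketch using scikit-learn (my own toy example, not tied to any study above): k-means clustering will happily “find” however many clusters you assume, whether or not the data actually contain them.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic 2D data drawn from exactly two groups...
data = np.vstack([rng.normal(0, 1, (100, 2)),
                  rng.normal(5, 1, (100, 2))])

# ...but the algorithm returns whatever number of clusters we assume.
for k in (2, 3, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    print(f"assumed k={k}: cluster sizes = {np.bincount(labels)}")
```

The data never told us k; we did. The same goes for the priors, model families, and selection criteria baked into any analysis.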

Astrophysicists Gather in Aspen to Study the Galaxy-Dark Matter Connection

I just returned from a summer workshop at the Aspen Center for Physics, and I enjoyed it quite a bit! The official title of our workshop was “The Galaxy-Halo Connection Across Cosmic Time.” It was organized by Risa Wechsler (Stanford) and Frank van den Bosch (Yale), along with others who unfortunately weren’t able to attend (Andreas Berlind, Jeremy Tinker, and Andrew Zentner). The workshop itself was very well attended by researchers and faculty from a geographically diverse range of institutions, but since it was relatively late in the summer, a few people couldn’t come because of teaching duties.


Since I grew up in Colorado, I have to add that Aspen is fine and I understand why it’s popular, but there are many beautiful mountain towns in the Colorado Rockies. Visitors and businesses should spread the love to other places too, like Glenwood Springs, Durango, Leadville, Estes Park, etc… In any case, when we had time off, it was fun to go hiking and biking in the area. For example, I took the following photo after hiking to the top of Electric Peak (elev. 13635 ft., 4155 m), and lower down I’ve included photos of Lost Man Lake (near the continental divide) and the iconic Maroon Bells.

[Photo: view from the top of Electric Peak]

The Aspen Center for Physics (ACP) is a great place for working and collaborating with colleagues. As they say on their website, “Set in a friendly, small town of inspiring landscapes, the Center is conducive to deep thinking with few distractions, rules or demands.” As usual, we had a very flexible schedule that allowed for plenty of conversations and discussions outdoors or in our temporary offices. Weather permitting, we had lunch and some meetings outside, and we had many social events too, including lemonade and cookies on Mondays and weekly barbecues. It’s also family-friendly, and many physicists brought their spouses and kids to Aspen too. I attended a previous ACP summer workshop on a similar theme (“Modeling Galaxy Clustering”) in June 2007, and it too was both fun and productive. Note that the ACP workshop is very different from the Madrid workshop I attended earlier this summer, which had specific goals we were working toward (and I’ll give an update about it later).

This year’s Aspen workshop connected important research on the large-scale structure of the universe, the physics of dark matter halo assembly, the formation and evolution of galaxies, and cosmology. We had informal discussions about the masses and boundaries of dark matter haloes in simulations, ways to quantify the abundances and statistics of galaxies we observe with telescopes and surveys, and how to construct improved models that accurately associate particular classes of galaxies with particular regions of the “cosmic web”—see this Bolshoi simulation image, for example, and the following slice from a galaxy catalog of the Sloan Digital Sky Survey:

[Image: slice from a Sloan Digital Sky Survey galaxy catalog]

While some of these issues have plagued us for years and remain unresolved, there are some subtle issues that have cropped up more recently. We (including me) have successfully modeled the spatial distribution of galaxies in the “local” universe, but now we are trying to distinguish between seemingly inconsistent but similarly successful models. For example, we know that the distribution of dark matter haloes in numerical simulations depends on the mass of the haloes—bigger and more massive systems tend to form in denser environments—as well as on their assembly history (such as their formation time), but these correlations can be quantified in different ways and it’s not clear whether there is a preferred way to associate galaxies with haloes as a function of these properties. For the galaxies themselves, we want to understand why some of them have particular brightnesses, colors, masses, gas contents, star formation rates, and structures and whether they can be explained with particular kinds of dark matter halo models.
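
One of the simplest models in this family, and a common baseline in these discussions, is abundance matching: rank haloes by mass and galaxies by luminosity, then pair them off one-to-one. Here’s a toy sketch with fake catalogs (and with the strong, debatable assumptions of a strictly monotonic relation and zero scatter, which are exactly the kinds of assumptions we argue about):

```python
import numpy as np

rng = np.random.default_rng(1)
halo_masses = rng.pareto(1.9, 10_000) * 1e11   # fake halo catalog [M_sun]
galaxy_lums = rng.pareto(1.3, 10_000) * 1e9    # fake luminosity function [L_sun]

# Zero-scatter abundance matching: the i-th most massive halo
# hosts the i-th most luminous galaxy.
halo_rank = np.argsort(halo_masses)[::-1]
lums_sorted = np.sort(galaxy_lums)[::-1]

assigned_lum = np.empty_like(galaxy_lums)
assigned_lum[halo_rank] = lums_sorted

i_max = halo_rank[0]
print(f"most massive halo ({halo_masses[i_max]:.2e} M_sun) "
      f"gets L = {assigned_lum[i_max]:.2e} L_sun")
```

Real models add scatter and use subhaloes and properties beyond mass (such as formation time), and that’s where the seemingly inconsistent but similarly successful models come from.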

[Photo: Lost Man Lake]

The main purpose of these workshops is to facilitate collaborations and inspire new ideas about (astro)physical issues, and it looks like we accomplished that. The previous workshop I attended helped me to finish a paper on analyzing the observed spatial distribution of red and blue galaxies with dark matter halo models (arXiv:0805.0310), and I’m sure that my current projects are already benefiting from this summer’s workshop. We seem to be gradually learning more about the relations between galaxy formation and dark matter, and my colleagues and I will have new questions to ask the next time we return to the Rockies.

Finally, here are those Maroon Bells you’ve been waiting for:

[Photo: the Maroon Bells]

Rosetta and the Comet

The title sounds like I’ll tell you a fable or short story or something. This is neither of those things, but it is quite a story! I’m not personally involved in the Rosetta mission, though I’ll do my best to tell you about it and about what makes it unique and exciting. (For you fellow astrophysicists reading this, if I’ve missed or misstated anything, please let me know.) And if you’d like more information and updates, I recommend looking at Emily Lakdawalla‘s blog posts on ESA and Phil Plait‘s blog on Slate. If you’re interested in the history and importance of comets (and in how “we’re made of starstuff”), check out Carl Sagan and Ann Druyan’s book, Comet.

Rosetta, the €1.3 billion flagship space probe (see below) of the European Space Agency (NASA’s European counterpart), has chosen to accept an ambitious mission: to chase down, intercept, and orbit a distant comet, and then send the lander Philae to “harpoon” itself to the surface and perform a detailed analysis. Rosetta is, of course, named after the famous Rosetta Stone, and Philae is named after an island in the Nile. Rosetta and Philae are hip spacecraft: they even have their own Twitter accounts—@ESA_Rosetta and @Philae2014, respectively. They should be careful when examining the comet below its surface, because if it’s anything like Star Trek, they could find an ancient alien archive in the center! (Fans of the “Masks” episode will know what I’m talking about.)

[Image: the Rosetta spacecraft (Science Magazine)]

Comets are literally pretty cool. They’re clumps of ice, dust, and organic materials, with tails, hurtling through space. What is this comet Rosetta’s pursuing? It’s known as Comet 67P/Churyumov-Gerasimenko, named after the pair of Ukrainian astronomers who discovered it in 1969. 67P/C-G looks like a mere blob from a distance, but it’s 4 km in diameter and lopsided, with two barely-attached lobes that make it look like a rubber duck from certain angles. “It may be an object we call a contact binary which was created when two smaller comets merged after a low-velocity collision,” said mission scientist Matt Taylor, or it may have once been a spherical object that lost much of its volatile material after encounters with the sun. It also has plumes of dust and gas (from sublimating ices) erupting from its surface, which has a temperature of about -70°C. (The montage of images below is courtesy of ESA/Rosetta/NAVCAM/Emily Lakdawalla.)

[Image: NavCam montage of comet 67P/C-G (ESA/Rosetta/NAVCAM/Emily Lakdawalla)]

[Image: OSIRIS view of the comet, 7 August 2014 (ESA/Rosetta)]

Comets tell us about our past, since they’re thought to have formed in the cold of the outer solar system 4.6 billion years ago. They also yield information about the formation of the solar system and about the role of comets in delivering water and organic material to the early Earth—possibly influencing the origin of life here. Cometary impacts are known to have been much more common in the early solar system than today. There may be billions of these dirty snowballs (or icy dustballs) orbiting the sun, and thousands of them have been observed. Prior to Rosetta, three comets had been analyzed by space probes: Halley’s comet by ESA’s Giotto in 1986, Comet Wild 2 by NASA’s Stardust in 2004, and Comet Tempel 1 by NASA’s Deep Impact, which slammed into it in 2005. The diagram below (courtesy of ESA/Science journal) shows the orbits of Rosetta and 67P/C-G. The comet travels at speeds up to 135,000 km/hr, and Rosetta had to use flybys of the Earth and Mars to maneuver onto the same orbital path. Rosetta will be the first mission ever to orbit and land on a comet, so this is truly a historic moment in space exploration.

[Diagram: the orbits of Rosetta and 67P/C-G (ESA/Science)]

On 11 November, Rosetta will be in a position to eject the Philae lander from only a couple kilometers away. Philae is a 100 kg, box-shaped lander with three legs and numerous instruments for experiments (see below), and it was provided by the German Aerospace Center (DLR). NASA scientists spoke of the “7 minutes of terror” when the Curiosity rover descended to Mars, but Philae’s descent will take hours. Note that 67P is so small and its gravity so weak that the lander would likely bounce off, which is why it needs the harpoons, as well as screws on its legs, to bolt itself to the surface. If the landing is successful—let’s cross our fingers that it is—it will perform many interesting experiments with its instruments. For example, CONSERT will use radio waves to construct a 3D model of the nucleus, Ptolemy will measure the abundance of water and heavy water, and COSAC will look for long-chain organic molecules and amino acids. COSAC will also detect the chirality of the molecules and maybe determine whether amino acids are always left-handed like the ones on Earth. (“Chirality” means “handedness.” I think the only other time I heard the term was for the spin statistics of spiral galaxies.)

[Image: Philae’s instruments (ESA)]

Let’s hope for Rosetta’s and Philae’s success! I’ll update you on this blog when I hear more information.

Extreme Space Weather Event #23072012

You may have seen some dramatic headlines in the news last week: “‘Extreme solar storm’ could have pulled the plug on Earth” (Guardian); “Solar ‘superstorm’ just missed Earth in 2012” (CBS); “How a solar storm two years ago nearly caused a catastrophe on Earth” (Washington Post blog). Also see this Physics Today article, which was published online today and reviewed the press attention to the event.

Journalists and editors often write hyperbolic headlines, but the danger from solar storms is very real, even though extreme ones are about as rare as massive earthquakes. Solar flares and eruptions threatening humans may evoke Stanislaw Lem’s Solaris or the Doctor Who episode “42,” but at least our sun isn’t sentient (as far as we know)!

[Image: a less threatening solar storm on the Sun]

The solar storm in question occurred two years ago on 23 July 2012, and the media reported on it following a NASA public-information release and accompanying four-minute YouTube video (see below). It seems that those of us who live on Earth and use electronic technology were lucky that this was a near miss. The threat of solar storms is also relevant to “space security”, which I wrote about in a previous post.

The paper itself was published last fall in the Space Weather journal by Daniel Baker, of the Laboratory for Atmospheric and Space Physics at the University of Colorado, and six colleagues from NASA, Catholic University, and the University of New Hampshire. Its full title is “A major solar eruptive event in July 2012: Defining extreme space weather scenarios,” and here is their abstract (abridged):

A key goal for space weather studies is to define severe and extreme conditions that might plausibly afflict human technology. On 23 July 2012, solar active region 1520 (141°W heliographic longitude) gave rise to a powerful coronal mass ejection (CME) with an initial speed that was determined to be 2500 ± 500 km/s [5.6 million miles/hr!]… In this paper, we address the question of what would have happened if this powerful interplanetary event had been Earthward directed. Using a well-proven geomagnetic storm forecast model, we find that the 23–24 July event would certainly have produced a geomagnetic storm that was comparable to the largest events of the twentieth century…This finding has far reaching implications because it demonstrates that extreme space weather conditions such as those during March of 1989 or September of 1859 can happen even during a modest solar activity cycle such as the one presently underway. We argue that this extreme event should immediately be employed by the space weather community to model severe space weather effects on technological systems such as the electric power grid.

The solar storm missed the Earth but hit NASA’s STEREO-A spacecraft, which was safely outside the Earth’s magnetosphere and was able to measure and observe the approaching CME, a billion-ton cloud of magnetized plasma. “I have come away from our recent studies more convinced than ever that Earth and its inhabitants were incredibly fortunate that the 2012 eruption happened when it did,” says Baker. “If the eruption had occurred only one week earlier, Earth would have been in the line of fire.” According to the simulations in the follow-up paper by Chigomezyo Ngwira et al., had the 2012 CME hit the Earth, it could have produced geomagnetically induced electric fields comparable to or larger than those produced by previously observed Earth-directed events, and it would have put electrical power grids, global navigation systems, orbiting satellites, etc. at risk.
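
For a feel of the timescales, here’s a quick hedged estimate of my own: at the quoted initial speed of 2500 km/s, and ignoring the deceleration a real CME experiences plowing through the solar wind, the cloud would cross the Sun-Earth distance in well under a day (typical slower CMEs take two to three days):

```python
AU_KM = 1.496e8   # mean Sun-Earth distance [km]
V_CME = 2500.0    # quoted initial CME speed [km/s]

transit_hours = AU_KM / V_CME / 3600.0
print(f"idealized Sun-to-Earth transit: {transit_hours:.1f} hours")  # ~16.6 h
```

That would leave little time to power down satellites or reconfigure power grids, which is part of what makes these fast events so dangerous.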

Pete Riley, a physicist at Predictive Science Inc., published a paper in 2012 in the same journal entitled “On the probability of occurrence of extreme space weather events.” He analyzed historical records of solar storms, and by extrapolating the frequency of ordinary storms, he calculated that the odds of a Carrington-class storm (like the one that occurred in 1859) hitting Earth in the next ten years are between 8.5 and 12%!
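
Riley’s headline number is easier to digest with a bit of Poisson arithmetic. A sketch, assuming (as such extrapolations do) that extreme storms are independent events arriving at a constant average rate; the 12% per decade is from the paper, and the rest is my own illustrative calculation:

```python
import math

p_decade = 0.12                       # Riley's estimate per 10 years
rate = -math.log(1 - p_decade) / 10   # implied events per year

for years in (1, 10, 50, 100):
    p_hit = 1 - math.exp(-rate * years)
    print(f"P(>=1 Carrington-class hit within {years:3d} yr) = {p_hit:.0%}")
# ~1% per year, 12% per decade, ~47% in 50 yr, ~72% in a century
```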

NASA has calculated that the cost of the 2012 CME hitting the Earth would have been twenty times the devastation caused by Hurricane Katrina—on the order of $2tn. The storm would have begun with a solar flare, which itself can cause radio blackouts and GPS navigation failures, followed by the arrival of the CME itself, potentially causing widespread havoc with global technological infrastructure. Anything that uses electricity, including water supplies, hospital equipment, and radio and television broadcasts, could be shut down. How do we prepare as a society for an event like that?

AAAS Symposium in Feb. 2015: Cutting-Edge Research with 1 Million Citizen Scientists

[This is an expanded version of a post I wrote for the Galaxy Zoo blog.]

Some colleagues and I successfully proposed a symposium on citizen science at the annual meeting of the American Association for the Advancement of Science (AAAS) in San Jose, CA in February 2015. (The AAAS is the world’s largest scientific society and is the publisher of the journal Science.) Our session will be titled “Citizen Science from the Zooniverse: Cutting-Edge Research with 1 Million Scientists,” referring to the more than one million volunteers participating in a variety of citizen science projects. This milestone was reached in February, and the Guardian and other news outlets reported on it.

[Image: AAAS 2015 Annual Meeting logo]

“Citizen science” (CS) involves public participation and engagement in scientific research in a way that educates the participants, makes the research more democratic, and makes it possible to perform tasks that a small number of researchers could not accomplish alone. (See my recent post on new developments in citizen science.)

[Image: Zooniverse logo]

The Zooniverse began with Galaxy Zoo, which recently celebrated its seventh anniversary, and which turned out to be incredibly popular. (I’ve been heavily involved in Galaxy Zoo since 2008.) Galaxy Zoo participants produced numerous visual classifications of hundreds of thousands of galaxies, yielding excellent datasets for statistical analyses and for identifying rare objects. Its success led to the development of a variety of CS projects coordinated by the Zooniverse in a diverse range of fields. For example, they include: Snapshot Serengeti, where people classify different animals caught in millions of camera trap images; Cell Slider, where they classify images of cancerous and ordinary cells and contribute to cancer research; Old Weather, where participants transcribe weather data from log books of Arctic exploration and research ships at sea between 1850 and 1950, thus contributing to climate model projections; and Whale FM, where they categorize the recorded sounds made by killer and pilot whales. And of course, in addition to Galaxy Zoo, there are numerous astronomy-related projects, such as Disk Detective, Planet Hunters, the Milky Way Project, and Space Warps.


We haven’t confirmed the speakers for our AAAS session yet, but we plan to have six speakers from the US and UK who will introduce and present results from the Zooniverse, Galaxy Zoo, Snapshot Serengeti, Old Weather, Cell Slider, and Space Warps. I’m sure it will be exciting and we’re all looking forward to it! I’m also looking forward to the meeting of the Citizen Science Association, which will be a “pre-conference” preceding the AAAS meeting.

Comparing Models of Dark Matter and Galaxy Formation

I just got back from the “nIFTy” Cosmology workshop, which took place at the IFT (Instituto de Física Teórica) of the Universidad Autónoma de Madrid. It was organized primarily by Alexander Knebe, Frazer Pearce, Gustavo Yepes, and Francisco Prada. As usual, it was a very international workshop, which could’ve been interesting in the context of the World Cup, except that most of the participants’ teams had already been eliminated before the workshop began! In spite of Spain’s early exit, the stadium of Real Madrid (which I visited on a day of sightseeing) was still a popular tourist spot. I also visited the Prado museum, which has an interesting painting by Rubens involving the Milky Way.


This was one of a series of workshops and comparison projects, and I was involved in some of the previous ones as well. For example, following a conference in 2009, some colleagues and I compared measures of galaxy environment—which are supposed to quantify the extent to which galaxy properties are affected by whether galaxies sit in clustered or less dense regions—using a galaxy catalog produced by my model. (The overview paper is here.) I also participated in a project comparing the clustering properties of dark matter substructures identified with different methods (here is the paper). Then last year, colleagues and I participated in a workshop in Nottingham, in which we modeled galaxy cluster catalogs that were then analyzed by different methods for estimating masses, richnesses, and membership in these clusters. (See this paper for details.)

This time, we had an ambitious three-week workshop in which each week’s program was related to the others. During the first week, we compared different hydrodynamical simulation codes, including the code used for the popular Illustris simulation, while focusing on simulated galaxy clusters. In week #2, we compared a variety of models of galaxy formation as well as models of the spatial distributions and dynamics of dark matter haloes. Then in week #3, we’re continuing the work from that Nottingham workshop I mentioned above. (All of these topics are also related to those of the conference in Xi’an that I attended a couple months ago, and a couple of the other attendees were here as well.)

The motivation of these workshops and comparison projects is to compare popular models, simulations, and observational methods in order to better understand our points of agreement and disagreement and to investigate the systematic uncertainties and assumptions that are often ignored or not taken sufficiently seriously. (This is also relevant to my posts on scientific consensus and so-called paradigm shifts.)

Last week, I would say, we had surprisingly strong disagreement and interesting debates about dark matter halo masses, which are the primary drivers of environmental effects on galaxies; about the treatment of tidally stripped substructures and ‘orphan’ satellite galaxies in models; and about various assumptions involved in ‘merger trees’ (see also this previous workshop). These debates highlight the importance of such comparisons: they’re very useful for the scientific community and for science in general. I’ve found that the scatter among different models and methods often turns out to be far larger than assumed, with important implications. For example, before we can learn about how a galaxy’s environment affects its evolution, we need to figure out how to properly characterize its environment, but it turns out that this is difficult to do precisely. Before we can learn about the physical mechanisms involved in galaxy formation, we need to better understand how accurate our models’ assumptions might be, especially assumptions about how galaxy formation processes are associated with evolving dark matter haloes. Considering the many systematic uncertainties involved, it seems that these models can’t be used reliably for “precision cosmology” either.

A few thoughts on the peer-review process

How does the peer-review process work? How do scientists critically review each other’s work to ensure that the most robust results and thorough analyses are published and that only the best research proposals are awarded grants? How do scientists’ papers and articles change between submission and publication? It’s a process with advantages and shortcomings, and maybe it’s time for us as a community to try to improve it. (I’m not the only person who’s written about this stuff, and you may be interested in other scientists’ perspectives, such as those of Sarah Kendrew, Andrew Pontzen, and Kelle Cruz. This blog on the Guardian has interesting related posts too.)

For us scientists, writing about our scientific research and writing proposals for planned research is a critically important aspect of the job. The ubiquity of the “publish or perish” maxim highlights its significance for advancing one’s career. Publishing research and evaluating and responding to others’ publications are crucial for scientists to try to debate and eventually reach a consensus on particular issues. We want to make sure that we are learning something new and converging on important ideas and questions rather than being led astray by poorly vetted results. Therefore, we want to make the peer-review process as effective and efficient as possible.

[Stock photo: female researcher taking notes]

For readers unfamiliar with the process, it basically goes like this: scientist Dr. A and her colleagues are working on a research project. They obtain a preliminary result—which may be a detection of something, the development of a new model, the refutation of a previously held assumption, etc.—which they test and hone until they have something they deem publishable. Then Dr. A’s group writes a paper explaining the research they conducted (so that it could potentially be repeated by an independent group) and laying out their arguments and conclusions while putting them in the context of other scientists’ work. If they can put together a sufficiently high-quality paper, they then submit it to a journal. An independent “referee” then reviews the work and writes a report. (Science is an international enterprise, so like the World Cup, referees can come from around the world.) The paper goes through a few or many iterations between the authors and referee(s) until it is either rejected or accepted for publication, and these interactions may be facilitated by an editor. At that point, the paper is typeset, the authors and copy editors check that the proof is accurate, and a couple months later the paper is published online and in print.

(In my fields, people commonly publish their work in the Astrophysical Journal, Monthly Notices of the Royal Astronomical Society, Astronomy & Astrophysics, Physical Review D, and many others, including Nature, where they publish the most controversial and provocative results, which sometimes turn out later to be wrong.)

In general, this system works rather well, but there are inherent problems with it. For example, authors are dependent on the whim of a single referee, and some referees do not spend enough time and effort when reviewing papers and writing reports for authors. On the other hand, sometimes authors do not write sufficiently clearly or do not sufficiently double-check all of their work before submitting a paper. Also, sometimes great papers can be delayed for long periods because of nitpicking or unpunctual referees, while other papers may appear as though they were not subjected to much critical scrutiny, though these things are often subjective and depend on one’s perspective.

There are other questions that are worth discussing and considering. For example, how should a scientific editor select an appropriate referee to review a particular paper? When should a referee choose to remain anonymous or not? How should authors, referees, and editors deal with language barriers? What criteria should we use for accepting or rejecting a paper, and in a dispute, when and in what way should an editor intervene?

Some authors post their papers online for the community on arXiv.org (the astronomy page is here) before publication, while others wait until a paper is in press. It’s important to get results out to the community, especially for junior scientists early in their careers. The early online posting of papers can yield interesting discussions and helpful feedback, which can improve the quality of a paper before it is published. On the other hand, some of these discussions can be premature; some papers evolve significantly, and quantitative and qualitative conclusions can change while a paper is being revised in the referee process. It is easy to jump to conclusions or to waste time with a paper that still needs further revision and analysis or that may even be fundamentally flawed. Of course, this can also be said about some published papers as well.

implications for science journalists

These issues are also important to consider when scientists and journalists communicate and when journalists write about or present scientific achievements or discoveries. Everyone is pressed for time, and journalists are under pressure to write copy within strict deadlines, but it’s very important to critically review the relevant science whenever possible. Also, in my opinion, it’s a good idea for journalists to talk to a scientist’s colleagues and competitors to try to learn about multiple perspectives and to determine which issues might be contentious. We should also keep in mind that achievements and discoveries are rarely accomplished by a single person; they come from collaborations and were made possible by the work of other scientists upon which they’ve built. (Isaac Newton once said, “If I have seen further it is by standing on the shoulders of giants.”)

Furthermore, while one might tweet about a new unpublished scientific result, for more investigative journalism it’s better, of course, to avoid rushing the analysis. We all like to learn about and comment on the new scientific study that everyone’s talking about, but unfortunately people generally pay the most attention to what they hear first, rather than to retractions or corrections that might be issued later on. We’re living in a fast-paced society and there is often demand for a quick turnaround for “content,” but the scientific enterprise goes on for generations—a much longer time-scale than the meme of the week.

improving the peer-review process

And how can this peer-review system be improved? I’ve heard a variety of suggestions, some of which are probably worth experimenting with. We could have more than one person review each paper, with the extra referees serving in an advisory role. We could pay scientists for fulfilling their refereeing duties. We could make it possible for the scientific community to comment on papers on the arXiv (or VoxCharta or elsewhere), thus making these archives of papers and proceedings more like social media (or rather like a “social medium,” but I never hear anyone say that).

Another related issue is that of “open-access journals,” as opposed to journals whose paywalls make papers inaccessible to many people. Public access to scientific research is very important, and there are many advantages to promoting open journals and to scientists using them more often. Scientists (including me) should think more seriously about how we can move in that direction.