Category Archives: Science And Society

The Hubble Group

So the big news today is that they’ve named the supercluster we live in:

Scientists previously placed the Milky Way in the Virgo Supercluster, but under Tully and colleagues’ definition, this region becomes just an appendage of the much larger Laniakea, which is 160 million parsecs (520 million light years) across and contains the mass of 100 million billion Suns.

Which kicked off this Twitter exchange between me and Lee Billings.

Accordingly, I propose that we rename the Local Group the Hubble Group, in honor of its namer, and to make it consistent with the other names. I will henceforth call it that. If anyone asks, I’ll explain.

Low-Carb Diets

Another well-designed study shows their benefits.

[Update a while later]

Here’s the original NYT piece.

[Afternoon update]

On an email list, I responded to a friend who was interested but is disgusted by eating fat (doesn’t like butter on anything except potatoes, cuts the fat off steak, etc.).

I’m not big on just eating fat per se myself, but I now take the fat I cut off and render it (tallow for beef, lard for pork, schmaltz for chicken) and add it to other things (like a can of “fat-free” baked beans yesterday), or fry eggs or other things in it. For instance, when you cook bacon, you’re actually rendering the lard (the bacon grease). When I render beef suet I get what I call “beef bacon,” tasty bits of crunchy protein, along with the tallow. I’ve quit using seed or vegetable oils for deep frying and switched to lard or tallow. The latter is what used to make McDonald’s fries taste good, until they got mau-maued into switching to other oils, and using it made a lot of economic sense, given that they own cattle ranches and generate so much of it in cooking the burgers. Also, eat crispy chicken skin (the chicken version of bacon). There are a lot of non-disgusting ways to increase your fat intake while improving food taste and mouthfeel.

I’d like to start a social-media campaign to get McDonald’s to go back to tallow for fries (yes, I know that potatoes are problematic, but if you’re going to eat them, at least fry them in a delicious and healthy fat). It might even knock down the prices.

In Which I Disagree With @Instapundit

I agree that neurosuspension is better than nothing, but I disagree that whole body is for suckers. This is a debate that’s been going on for years in cryonics discussions.

We simply don’t know how much of our identity is in our body, as opposed to simply our brains. For instance, I suspect that there is a lot of distributed motor intelligence in athletes and musicians — when I play an instrument (or, for that matter, simply type on a keyboard), I have a sense that my hands aren’t being directly controlled by the brain, but are rather receiving higher-level commands issued by the brain that are implemented at a lower level, based on local memory. I don’t know that to be the case, but if you can afford to keep the whole body, it might be worth it to avoid having to reacquire old skills.

What Makes Us Fat

Here’s a radical idea: Let’s do some actual scientific research:

…much of what we think we know about nutrition is based on observational studies, a mainstay of major research initiatives like the Nurses’ Health Study, which followed more than 120,000 women across the US for three decades. Such studies look for associations between the foods that subjects claim to eat and the diseases they later develop. The problem, as Taubes sees it, is that observational studies may show a link between a food or nutrient and a disease but tell us nothing about whether the food or nutrient is actually causing the disease. It’s a classic blunder of confusing correlation with causation—and failing to test conclusions with controlled experiments. “Good scientists will approach new results like they’re buying a used car,” he says. “When the salesman tells you it’s a great car, you don’t take his word for it. You get it checked out.”

NuSI’s starting assumption, in other words, is that bad science got us into the state of confusion and ignorance we’re in. Now Taubes and Attia want to see if good science can get us out.

What a concept.
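To see the correlation-versus-causation trap concretely, here’s a toy simulation (my own sketch, not anything from the Wired piece or NuSI; the “lifestyle,” “food,” and “disease” variables are all made up). An observational study of this simulated population would flag the food as dangerous, even though by construction it causes nothing:

import random

random.seed(1)

n = 100_000
exposed = 0
exposed_and_sick = 0
unexposed = 0
unexposed_and_sick = 0

for _ in range(n):
    # Hidden confounder, e.g. a generally unhealthy lifestyle.
    unhealthy_lifestyle = random.random() < 0.5

    # People with the unhealthy lifestyle are more likely to eat the food...
    eats_food = random.random() < (0.7 if unhealthy_lifestyle else 0.3)

    # ...and more likely to get the disease, but the food itself does nothing.
    gets_disease = random.random() < (0.20 if unhealthy_lifestyle else 0.05)

    if eats_food:
        exposed += 1
        exposed_and_sick += gets_disease
    else:
        unexposed += 1
        unexposed_and_sick += gets_disease

print("Disease rate among food eaters: %.3f" % (exposed_and_sick / exposed))
print("Disease rate among non-eaters:  %.3f" % (unexposed_and_sick / unexposed))
# The first rate comes out substantially higher than the second, so an
# observational study would associate the food with the disease, yet by
# construction the food has no effect. Only a controlled experiment, which
# randomizes who eats the food, would show that.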

The SpaceX “Test Failure”

This isn’t new — I wrote it on Saturday at Ricochet, but it’s behind the paywall, so I thought I’d repost it here:

So the big news yesterday for people in the space business was that SpaceX finally lost an experimental test vehicle in its program to make its vehicles reusable (crucial to dramatically reducing costs to the point necessary to achieve its corporate goal of opening up the solar system). Some criticized it as a “failure” of the company. This is nonsense.

People need to understand that the purpose of an engineering test is to learn something. As I said on Twitter last night, the only “failed” test is one in which you didn’t get the information you were seeking. Losing hardware in a test is not a “test failure,” per se:

For example, consider the crash testing of cars: a successful test results in a wrecked car, but it tells you where the weak points are so that you can improve the design, and the only test “failure” you can have is if the car fails to hit the barrier. In SpaceX’s case, the goal of the test wasn’t to destroy the vehicle per se, but they were fully aware that it could be an outcome. In fact, Gwynne Shotwell, the company’s president, said last year that she was a little disappointed that they retired the first test vehicle, Grasshopper, because the fact that they never lost it in a test meant that they weren’t pushing the envelope hard enough.

Had it failed to deliver a paying customer’s payload to its designated destination, that could rightfully have been called a “failure,” and the company justly criticized for it. But when an experimental vehicle crashes during a flight test, that’s called “flight test.”

SpaceX probably knows what the cause was, but it hasn’t yet been reported. The most common cause of failure in rockets is a failed stage separation, which of course doesn’t apply here, since this is a single-stage test vehicle. It could also have been an engine failure, but they have a lot of experience with their engines and hardware in general, so that’s an unlikely cause.

For this kind of vehicle, it’s really a test of the flight-control system, which is not only the computers, sensors, and software, but also the actuators that steer it. It’s possible that they had an actuator or engine-gimbal hardware failure, but they’ve had lots of test flights and never run into that problem. My guess (and it’s only that), based on viewing the video, is that they were pushing the vehicle beyond its capabilities to do something they’d never attempted before (perhaps translating, i.e., going sideways, while also descending or changing attitude), and it lost control (like an aircraft in a tailspin) without the ability to regain it.
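To illustrate the kind of limit I’m speculating about, here’s a toy one-axis model (my own sketch, with made-up numbers, nothing from SpaceX): a simple controller with a capped “gimbal” torque holds the vehicle’s tilt against a modest disturbance, but once the disturbance exceeds the actuator’s authority, the attitude diverges and control is never regained.

# Toy 1-degree-of-freedom attitude model with a torque-limited PD controller.
# All numbers are purely illustrative.

def simulate(disturbance_torque, max_control_torque=1.0,
             kp=4.0, kd=3.0, inertia=1.0, dt=0.01, t_end=20.0):
    """Return the final tilt angle (radians) after t_end seconds."""
    angle, rate = 0.1, 0.0  # small initial tilt, no initial rotation
    t = 0.0
    while t < t_end:
        # PD command, clipped to what the gimbaled engine can actually deliver.
        commanded = -kp * angle - kd * rate
        actual = max(-max_control_torque, min(max_control_torque, commanded))
        accel = (actual + disturbance_torque) / inertia
        rate += accel * dt
        angle += rate * dt
        t += dt
    return angle

print("Disturbance within authority, final tilt: %8.2f rad" % simulate(0.5))
print("Disturbance beyond authority, final tilt: %8.2f rad" % simulate(1.5))
# In the first case the clipped torque can still cancel the disturbance and the
# tilt settles near a small trim value; in the second it can't, the error grows
# without bound, and the only remaining option is flight termination.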

Once you lose control, the decision to terminate the flight comes pretty quickly, because bad things can happen very quickly after that. If they hadn’t been able to terminate the flight, and it had resulted in unexpected damage on the ground, that would have been grounds for criticism, but the vehicle was safed exactly as planned, under FAA guidance and supervision.

Other than losing the vehicle, this flight was indeed a great success by the criteria of providing the information desired. At least two people from SpaceX, including Lars Blackmore, the lead of their entry, recovery and landing team, tweeted last night that they got “lots” of data.

Presumably in this case, if my theory is correct, they now understand the limits of the flight-control system. They may be able to reproduce the failure in ground simulation and tweak the software to avoid it in the future.

Was this a setback for SpaceX? Someone on Fox described the test last night this way: “A small rung on a long ladder to Mars broke on Friday, when a rocket test in Texas ended in a midair ball of fire.”

Jeff Foust called it a setback in his piece at the NewSpace Journal, and Jeff is a very smart guy, but I think he’s wrong, or at least it’s not obvious that it was one. In fact, when I asked him, Lars tweeted that he didn’t necessarily consider it one:

I would consider something a setback only if it actually results in a delay of a critical program milestone. I think they have another test vehicle (which they’ll be flying out of New Mexico soon for higher-altitude testing), and if they need yet another for McGregor, given their production capacity, they could probably pull one off the line and modify it pretty quickly. They’ve found something to fix in the next test vehicle (and possibly, though not necessarily, depending on the cause, in an operational one). Also, in a sense, they’re no longer test-flight virgins, and may even be bolder going forward.

It’s certainly not going to affect their future launches (most importantly, next week’s), since it’s a side experimental program on which none of their current customers are dependent. So no, I don’t think it was much of a setback, if any.

On the other hand, I think that Blue Origin’s loss of its test vehicle three years ago may have been a setback, because they haven’t flown anything since (as far as I know). Unlike yesterday’s event, it may have been a totally unexpected, “back to the drawing board” thing. But I have no inside knowledge.

In addition to the general point of the difference between a hardware loss in a test and failure in operations, there is another point to consider here. While you expect problems in flight test of any new vehicle, VTVL (vertical take-off, vertical landing) types are particularly susceptible, not having wings to come home on if there’s a failure (though some use chutes as backup). I don’t think there is any serious VTVL company that hasn’t lost a vehicle in flight test, from Blue Origin, to Masten, to Armadillo, to Unreasonable Rocket. As Elon Musk tweeted last night, rockets are tricky:

I’d say that losing a VTVL vehicle in flight test is inevitable, almost a rite of passage, and that SpaceX just finally joined the club.

In fact, this isn’t actually the first experimental vehicle they’ve lost attempting to land it. It’s just the first on land. In a very real sense, every previous attempt to do an ocean recovery of the first stage, after it had completed its primary mission, was a flight test: a success in that they got great data from each one to build on for the next, and a “failure” only in the sense that they didn’t actually recover the stage. The company plans one more of these water “recoveries” this fall. Based on history, they have low expectations of getting the vehicle back this time as well, but they obviously expect to get the critical data needed to start landing actual first stages on land (though the first attempt or two will be on a barge at sea, before they have demonstrated the control required for the FAA and the range to allow a flight back to the launch site).

But with each test, regardless of whether they get the vehicle back, they continue on their risky quest, with their own money, to achieve a long-time dream of the space industry (though one that NASA abandoned after the Shuttle): an end to the wasteful and costly practice of throwing vehicles away. They should be encouraged to continue in their boldness. As I note in my recent book, such boldness, not caution or timidity, is crucial in opening up the harshest frontier humanity has ever faced.

[Sunday-morning update]

OK, not exactly a “setback,” but SpaceX has announced that they will delay Tuesday’s planned AsiaSat 6 satellite launch one day, to Wednesday, to allow them time to review the test results to ensure that the vehicle loss wasn’t caused by something that could affect the flight. “Mission assurance above all.”

[Wednesday-morning update]

They announced yesterday that they’re now delaying the launch by several days, but it’s unclear whether it’s related to Friday’s vehicle loss.

The 50-50 Argument

It’s not logical to state that most warming since 1950 has been caused by man (or Mann):

The glaring flaw in their logic is this. If you are trying to attribute warming over a short period, e.g. since 1980, detection requires that you explicitly consider the phasing of multidecadal natural internal variability during that period (e.g. AMO, PDO), not just the spectra over a long time period. Attribution arguments of late 20th century warming have failed to pass the detection threshold which requires accounting for the phasing of the AMO and PDO. It is typically argued that these oscillations go up and down, in net they are a wash. Maybe, but they are NOT a wash when you are considering a period of the order, or shorter than, the multidecadal time scales associated with these oscillations.

Further, in the presence of multidecadal oscillations with a nominal 60-80 yr time scale, convincing attribution requires that you can attribute the variability for more than one 60-80 yr period, preferably back to the mid 19th century. Not being able to address the attribution of change in the early 20th century to my mind precludes any highly confident attribution of change in the late 20th century.

In other words, we can’t, and shouldn’t, have as much confidence in the attribution as many who want to push their policy agenda would like us to have.
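To make the phasing point concrete, here’s a toy calculation (mine, not Curry’s; the amplitude and period are illustrative, not actual AMO/PDO values). A sinusoidal oscillation with a 65-year period is a wash over a full cycle, but a linear fit over a 30-year window sitting on its rising or falling limb picks up a substantial apparent trend:

import math

PERIOD = 65.0      # years, nominal multidecadal time scale
AMPLITUDE = 0.15   # degrees C, purely illustrative

def oscillation(year, phase):
    """Value of the toy oscillation at a given year, for a given phase offset."""
    return AMPLITUDE * math.sin(2.0 * math.pi * year / PERIOD + phase)

def trend_per_decade(years, values):
    """Ordinary least-squares slope, converted to degrees C per decade."""
    n = len(years)
    mean_t = sum(years) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(years, values))
    den = sum((t - mean_t) ** 2 for t in years)
    return 10.0 * num / den

window = list(range(0, 31))      # a 30-year attribution window
full_cycle = list(range(0, 66))  # one complete 65-year cycle

for phase, label in [(-math.pi / 2, "window on rising limb"),
                     (math.pi / 2, "window on falling limb")]:
    vals = [oscillation(y, phase) for y in window]
    print("%-22s %+.3f C/decade" % (label, trend_per_decade(window, vals)))

vals = [oscillation(y, -math.pi / 2) for y in full_cycle]
print("%-22s %+.3f C/decade" % ("full cycle", trend_per_decade(full_cycle, vals)))
# The same oscillation that averages out over a full cycle contributes a
# noticeable apparent trend over a 30-year window, with the sign and size
# depending on its phase, so the phase has to be accounted for before
# attributing such a trend to anything else.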