Space Is Really Big

But not quite big enough:

In an unprecedented space collision, a commercial Iridium communications satellite and a presumably defunct Russian Cosmos satellite ran into each other Tuesday above northern Siberia, creating a cloud of wreckage, officials said today.

What a mess. At that altitude, the pieces are going to be there a long time, and present a hazard to other LEO satellites. I hope that this isn’t the event that sets off a cascade. I don’t understand why NORAD didn’t predict this. I know they don’t have the elements to the precision necessary to know that they’ll collide, but I would think that they could propagate them well enough to see that they would come close. And if we had true operationally responsive space capability, we could have sent something up to change the orbit of one of them, if they couldn’t do it themselves. This is the price we pay for not being a truly spacefaring civilization, despite the billions wasted over the past decades.

[Update in the evening]

Clark Lindsey has more links, and thoughts.

[Thursday morning update]

The Orlando Sentinel was somewhat prescient about this story, having run a piece on space debris last weekend.

[Mid-morning update]

Clark Lindsey has several more links.

25 thoughts on “Space Is Really Big”

  1. Most of the mission of satellite catalog maintenance and collision avoidance prediction was moved to the Joint Space Operations Center (JSpOC) at Vandenberg AFB last year. NORAD isn’t doing much of that today.

    Even at the JSpOC, the computer equipment is based on the old SPADOC system. It simply doesn’t have the power to do an “all on all” conjunction assessment in a reasonable time. There are something like 12,000 cataloged objects in space, and doing a full conjunction assessment on everything isn’t within the current capabilities. The new system (due to go online in a few years) will be able to do it but not the current one. Had they the capability to generate the warning, perhaps Iridium could’ve maneuvered their satellite a little to avoid the collision, assuming the satellite was still active. The other satellite was dead.

    These pieces are high enough to take decades to decay. Their inclination essentially crosses just about all other orbits. There are a lot of Iridium satellites alone up there. Let’s hope they don’t start bumping into these new pieces or things will get really bad really fast.

  2. My father used to like to say, “Space is big, space is dark. You still can’t find a place to park.” and then he’d mutter something that sounded like “Burma Shave.” I had to look that one up later in life.

  3. “…in a few years”? What’s the hold up? Budget? Maybe they should have had new computers for JSpOC in the “stimulus” bill.

    You have to understand the military procurement system especially when it comes to a major hardware/software development effort. There is nothing even remotely similar to the JSpOC out there so everything is a custom development. From that perspective, 2012 is considered “near term”. The JSpOC is still a new and evolving organization that’s seriously understaffed. They’re so busy trying to keep up with the day-to-day workload that looking to the future is not a big priority (IMO). They’re good people but grossly overloaded. They’re still trying to hammer out the requirements for the new system, let alone get it developed.

    They have so many stovepiped systems operating at different security classification levels that integration is very difficult. The last time I was at the JSpOC, I saw crewmembers with as many as six different computer systems (each with its own keyboard and mouse) and zero integration between them. People had to mentally integrate the information. We’re working to change that, but this is the government we’re talking about. Development takes so long that most of the hardware and software will be obsolete before it becomes operational, but it’ll be a big improvement over what they have today.

  4. And if we had true operationally responsive space capability, we could have sent something up to change the orbit of one of them, if they couldn’t do it themselves.

    Obama’s defense policy calls for a ban on systems that could “interfere with commercial or military satellites.”

    If that happens, we can forget about developing any significant orbital capability.

  5. Pingback: In Other Words
  6. Larry J, that is right fascinating. No wonder I keep coming back to this blog (and spamming it with giant unread treatises on the philosophy of liberty, but that last bit’s Rand’s fault anyway for keeping the front door unlocked, since we’re in the new era of Root Causes and Not My Fault).

    I have a lot of experience solving dynamical systems on computers, and at first glance 12,000 particles under a simple force law isn’t very impressive. It doesn’t sound like it would be hard to find the next collision; in the ordinary way I’d contract to write the program in two weeks, and run it on a Core 2 Linux box, expecting each collision prediction to take an hour or two.

    So…what am I getting wrong? Why is the real thing so much harder? The first thing that comes to mind is the precision required. Maybe to predict collisions between two 1 m diameter targets over a path length of tens of megameters you need lots and lots of significant digits, which means you’ve got to integrate your equation of motion very carefully and slowly. (A rough tally of what the brute-force approach implies is sketched at the end of this comment.)

    If you know the answer, I’d be interested to hear it. I mean, provided we don’t want to have another argument about Viagranomics instead…
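
    For scale, a hedged back-of-the-envelope on what a naive all-on-all check implies for the roughly 12,000 cataloged objects mentioned earlier in the thread; the 10-second sampling interval is purely an assumption for illustration.

    ```python
    # Back-of-the-envelope only: the cost of a naive all-on-all screen for the
    # ~12,000 cataloged objects mentioned above. The 10-second sampling
    # interval is an assumption, not a number from the thread.
    n = 12_000
    pairs = n * (n - 1) // 2            # ~72 million unique pairs
    samples_per_day = 24 * 3600 // 10   # one coarse position check every 10 s
    distance_checks = pairs * samples_per_day   # ~6e11 evaluations per day
    print(f"{pairs:,} pairs, {distance_checks:,} distance checks per day")
    ```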

  7. Surely you don’t have to compare the position of each object to the position of every other object, at every time?

    Split both time and space into discrete chunks. I don’t know how many. At one limit the chunk is all of near-Earth space and some long period of time (the naive way). At the other limit the chunk is maybe 1 m × 1 m and a millisecond in time. But I suspect something like 5 km chunks of space and 1 second time intervals would be OK. Or maybe ten times bigger. Or a hundred.

    For each object, calculate which box it is in at a particular time. If a box has more than one object in it, then those objects need to be compared against each other. But almost all boxes will have zero objects in them, and most of the rest will have exactly one.
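
    A minimal sketch of that binning idea (not anything the JSpOC actually runs); the `propagate` function, the object list, and the 5 km cell size are all assumptions for illustration:

    ```python
    # Bin every object into a 5 km cube at time t and only compare objects
    # that share a cube. `propagate(obj, t)` is a hypothetical function that
    # returns the object's (x, y, z) position in km.
    from collections import defaultdict
    from itertools import combinations

    CELL_KM = 5.0  # cell edge length; only objects sharing a cell get a close look

    def coarse_pairs(objects, t, propagate):
        cells = defaultdict(list)
        for obj in objects:
            x, y, z = propagate(obj, t)
            cells[(int(x // CELL_KM), int(y // CELL_KM), int(z // CELL_KM))].append(obj)
        pairs = []
        for members in cells.values():
            pairs.extend(combinations(members, 2))
        # Caveat: a close pair straddling a cell boundary is missed here; in
        # practice you would also check the neighboring cells or pad the cell size.
        return pairs
    ```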

  8. In the context of dynamic simulations that’s called a cell method, Bruce. Works best in high-density systems. In a very low density system, like this, you want to use neighbor lists instead so you don’t exhaust your memory with giant lists of zeros. You keep a list of the “nearby” particles, and calculate distances to them carefully, ignoring everybody else. From time to time, you calculate all the distances, and update the neighbor lists accordingly.

    But you wouldn’t do either of these things for a mere 12,000 particles in ballistic motion. You’d propagate by collision, solving algebraically (or if necessary numerically) for the time until the next collision for each pair of trajectories, then advancing each trajectory ballistically until the next collision between any of them, solving the collision problem, and so on.

    I’m assuming, however, that you can propagate orbits ballistically. Maybe that’s not true, what with bitty fluctuations in drag with the exosphere, solar wind, gravity waves, whatever. I’ve no idea how these things are done.
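
    For what it’s worth, here is a rough sketch of the numerical branch of that per-pair closest-approach idea, under the same caveat that ballistic propagation may not hold; the `propagate_a`/`propagate_b` functions and the 10-second coarse step are assumptions, not anything from the thread.

    ```python
    # Coarsely sample the separation of two (assumed ballistic) trajectories
    # over a window, then refine near the minimum. propagate_a / propagate_b
    # are hypothetical functions returning a numpy position vector in km at
    # time t (seconds past some epoch).
    import numpy as np

    def closest_approach(propagate_a, propagate_b, t0, t1, coarse_step=10.0):
        times = np.arange(t0, t1, coarse_step)
        sep = [np.linalg.norm(propagate_a(t) - propagate_b(t)) for t in times]
        k = int(np.argmin(sep))
        # Refine on a finer grid bracketing the coarse minimum.
        lo = times[max(k - 1, 0)]
        hi = times[min(k + 1, len(times) - 1)]
        fine = np.linspace(lo, hi, 1001)
        fine_sep = [np.linalg.norm(propagate_a(t) - propagate_b(t)) for t in fine]
        i = int(np.argmin(fine_sep))
        return fine[i], fine_sep[i]  # time and distance (km) of closest approach
    ```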

  9. With a powerful enough computer, running all-on-all conjunction assessments isn’t that hard a problem. I saw a Beowulf cluster at the Aerospace Corp a few years ago that they said could do an all-on-all conjunction assessment for a 24-hour period in about 15 minutes of run-time. They built their cluster for under $10,000, IIRC. Unfortunately, from what I’ve seen at the JSpOC, they don’t have that kind of computing power. From what I’ve heard, when they moved the space mission out of Cheyenne Mountain to the JSpOC last year, they basically set up the identical stuff they had in the mountain (hardware and software) to run SPADOC 4, a setup that has been in operation for a long time (since the mid-1990s, IIRC).

    You’re right that these calculations require a lot of precision. Sometimes, the precision may be more than the current system can handle. Most of their routine operations use General Perturbations (GP) propagation. That’s good for a reasonable amount of accuracy (the actual value is classified), but if you really want high accuracy, you need Special Perturbations (SP) processing. Most of their current work is at the GP level, with SP being reserved for high interest items like the ISS. I’m told the new system will use SP for just about everything.

    Calculating orbital conjunctions is complicated by the fact that those ~12,000 pieces are in just about every orbital inclination imaginable, with varying degrees of eccentricity and altitudes ranging from LEO to beyond GEO. You can apply some smart filtering to eliminate objects that will never cross (such as LEO satellites against MEO and GEO satellites). However, there are also high eccentricity satellites like the Molniya series, whose perigee is a few hundred kilometers high with an apogee out near GEO and an inclination over 60 degrees. Those are tough to process because they cross so many other orbits and because we don’t have enough space surveillance sensors to generate really high quality orbital parameters. It’s a non-trivial computational exercise.
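
    A minimal sketch of that kind of pruning, an apogee/perigee overlap check of my own devising (the 50 km pad and the orbit tuples are assumptions); it only says whether a pair can ever come close, before any expensive propagation:

    ```python
    # Drop pairs whose altitude bands can never overlap. Each orbit here is a
    # hypothetical (semi_major_axis_km, eccentricity) tuple.
    PAD_KM = 50.0  # margin to absorb orbit-determination and drag errors

    def could_ever_conjunct(orbit1, orbit2, pad_km=PAD_KM):
        a1, e1 = orbit1
        a2, e2 = orbit2
        perigee1, apogee1 = a1 * (1 - e1), a1 * (1 + e1)
        perigee2, apogee2 = a2 * (1 - e2), a2 * (1 + e2)
        # A close approach is geometrically possible only if the two
        # [perigee, apogee] bands overlap to within the pad.
        return perigee1 - pad_km <= apogee2 and perigee2 - pad_km <= apogee1

    # e.g. an ISS-like orbit vs. a GEO orbit never needs a fine check:
    print(could_ever_conjunct((6778.0, 0.001), (42164.0, 0.0)))  # False
    ```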

  10. What I’ve learned about orbital dynamics jibes with Larry J’s comments. It ISN’T a simple two-body Kepler problem with a nice closed-form solution. You probably want to use the full classified set of WGS-84 gravitational coefficients, with the potential represented as a series of Legendre polynomials or whatever they’re called, and numerically integrate the orbits in addition to doing the collision assessment.

    Plus for longer term you have drag and radiation and tumbling, etc.

  11. Space Operations is short a piddling $10,000 piece of computer hardware? I’m a bit…agog. That’s practically noise, like a potted plant for the secretary’s desk, for even a small company. An academic department wouldn’t even bother writing a grant proposal for that; they’d just scrape it out of the Miscellaneous, Graft, and Dean Birthday Parties fund. I mean, it’s less than the cost of hiring a teenager to sort the mail part-time for six months.

    I assume the reason here is bureaucratic folly, or maybe the problem of legacy software and systems. But that’s really weird.

  12. Wouldn’t this be an ideal distributed computing problem, sort of like seti@home or that protein-folding project? Use surplus computing power to handle the bulk of the work and JSpOC can free up resources to get ahead of the problem and perhaps prevent future collisions.

  13. I find this a fascinating computing problem.

    I would be surprised if the $10k estimate is correct for new hardware. I imagine it would be an order of magnitude or more. Perhaps Aerospace had a bunch of excess hardware they made into a cluster for $10k of effort.

    Since JSpOC supports national security interests, it cannot (and should not) rely on an insecure distributed computing network for critical analyses.

    Personally, I think distributed computing is underutilized, but that’s another topic.

  14. Since JSpOC supports national security interests, it cannot (and should not) rely on an insecure distributed computing network for critical analyses.

    Yes, there are aspects of JSpOC’s work that require much higher accuracy and reliability than COTS solutions. Some of these aspects are extremely critical, such as “are we under attack?” The level of accuracy of Aerospace’s conjunction assessment might not have been good enough to prevent this collision, especially with GP processing.

    JSpOC does do conjunction assessment on many high interest satellites but the Iridium constellation wasn’t included.

  15. A topic that is close to my heart…

    Certainly this collision points out, again, a place where an easy extrapolation would have shown us a mission that would add value to the world’s space activities.

    A bit of history – the AF has been doing Computation of Miss Between Orbits (COMBO) for years. The old Space Defense Center in the Cheyenne Mountain Complex did it on the old 496L computers. Then the 427M took over (in 1980) as they renamed the center the Space Computational Center. I believe that Carlos Noriega was a part of the team that finally shut down the old 427M in about 1992.

    The point of that is that the acquisition and verification of a hardware/software system that processes military data is a long and difficult process. The 496L – 427M switch was very difficult, took a long time, etc. In fact, we had switched over, but then management was not confident in the 427M and made us switch back just for the re-entry of the Skylab vehicle. So getting a current system with modern features and performance is a multi-year effort.

    That said, we always had an off line system at Peterson AFB that did a lot of software development, analysis of breakups, etc., and that did not have anywhere near the validation requirements. So even if the JSpOC did not have the performance to screen for collisions at all altitudes, you could save a set of orbital vectors and run them (you would miss a few classified vectors, but so what) on a higher-performance system.

    It does seem now that the AF might realize that they could provide a valuable service by doing a coarse screen on an off line system, and then doing the finer analysis on the more current element sets using Special Perturbation (much more accurate) processing.

    This would produce a set of objects that could be more closely watched, or active satellites such as Iridium could probably be nudged into orbits with a lower probability of collision.

  16. Interesting stuff.

    If this did cause some type of cascading event, it would certainly cause quite a disruption to our current satellite-dependent services. A silver lining is that it might be the enabling market we are looking for to spur on private space: debris collection.

  17. So what is the difference between GP and SP processing? Are we talking about much more accurate input data, e.g. the location and velocity vectors of the satellite at epoch Foo, or are we talking a much more accurate description of the Earth’s gravitational field?

  18. Carl Pham asked about General Perturbations and Special Perturbations processing. The difference is that General Perturbations takes the two-body model (a satellite orbiting around a central body such as the Earth) and then computes the perturbations due to drag, oblateness of the Earth, atmospheric effects, etc. It uses “less accuracy” as a trade-off for speed. It produces a two-line element set or a vector that is accurate enough for a radar tracker; those broad beams don’t need a lot of accuracy.

    SP uses more accurate models (for the atmosphere of the Earth, for instance) and takes much more time. We used it for targeting orbital interceptors, calculating re-entry predictions, and doing Computation of Miss Between Orbits (COMBO).

    Now I must admit that it has been several years since I did a GP or SP differential correction, but I have talked fairly recently to people who have.

    One thing that I hope to have added to the conversation (and people’s understanding) is that these computations do not have to happen on the operational satellite catalog system! They must still have the capability to dump the unclassified element sets to a transfer medium (CD perhaps?) and put them on an off line system to find potential collisions.
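
    To make the GP/SP cost difference concrete, here is a toy sketch of the SP-style approach: numerically integrating the equations of motion with an explicit perturbation term, here only the Earth’s J2 oblateness (a real SP propagator adds drag, higher-order gravity, lunisolar effects, solar radiation pressure, and more). The initial state and tolerances are illustrative assumptions, not operational values.

    ```python
    # Numerically integrate two-body motion plus the J2 oblateness term.
    import numpy as np
    from scipy.integrate import solve_ivp

    MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
    RE = 6378.137      # km, Earth's equatorial radius
    J2 = 1.08263e-3    # Earth oblateness coefficient

    def two_body_j2(t, state):
        """Two-body acceleration plus the J2 perturbation, state in km and km/s."""
        x, y, z, vx, vy, vz = state
        r = np.sqrt(x * x + y * y + z * z)
        ax, ay, az = -MU * x / r**3, -MU * y / r**3, -MU * z / r**3
        k = 1.5 * J2 * MU * RE**2 / r**5
        ax += k * x * (5 * z**2 / r**2 - 1)
        ay += k * y * (5 * z**2 / r**2 - 1)
        az += k * z * (5 * z**2 / r**2 - 3)
        return [vx, vy, vz, ax, ay, az]

    # Propagate a roughly 790 km circular orbit (Iridium-like altitude) for a day.
    state0 = [RE + 790.0, 0.0, 0.0, 0.0, 7.45, 0.5]
    sol = solve_ivp(two_body_j2, (0.0, 86400.0), state0, rtol=1e-9, atol=1e-9)
    print(sol.y[:3, -1])  # position (km) after 24 hours
    ```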

  19. One thing that I hope to have added to the conversation (and people’s understanding) is that these computations do not have to happen on the operational satellite catalog system! They must still have the capability to dump the unclassified element sets to a transfer medium (CD perhaps?) and put them on an off line system to find potential collisions.

    There are many supercomputer centers owned by the government that can do this processing. Believe me, this is likely to happen soon. However, they need to use algorithms that have gone through the verification and validation (V&V) process.

    There are places on the net where you can find conjunction predictions. The problem is that most if not all of these places use the less accurate GP processing, so the margin of error is considerable. There was a predicted conjunction between these two satellites, but the predicted miss distance was actually greater than for many other predicted conjunctions that day. They were all within the margin of error for the GP element sets.

    Air Force Space Command and the JSpOC don’t want to put out inaccurate predictions. What good would it do to say that someone’s satellite might pass within a kilometer (give or take) of another object? Would you perform a maneuver under those conditions? If so, of what magnitude and in which direction? Maneuvering a satellite consumes its propellant and shortens its life so they don’t want to do so unnecessarily.

    One of the problems is the shortage of highly skilled orbital analysts at the JSpOC. When they moved the mission from Colorado Springs to California, only a couple of the orbital analysts made the move (and I’ve heard that one of them has since moved back). They literally lost many decades of OA experience when they moved the mission. I’m not suggesting in any way that the people at the JSpOC are unqualified or lack skill. I’m saying they’re seriously overworked, so supporting CFE (commercial and foreign entities) has not been a very high priority in the grand scheme of things. That’s also changing as we speak.

  20. Well, if this does lead to a cascade, it may just be the incentive we need to dust off the old (real) Orion plans and build some spaceships worthy of the name:-)

  21. Larry J said: AFSPC and the JSpOC don’t want to put out inaccurate conjunction predictions. And that is certainly true. They do put out accurate predictions, however, including ones that prompt the Shuttle and Station to raise or (rarely) lower their altitude. There is a well-established procedure in which the satellite that maneuvers goes to an orbit where the probability of collision is much smaller. This is done and is well understood.

    And one thing that must have happened when the JSpOC moved was that the experienced people moved on to new assignments with junior people coming to replace them. So the corporate knowledge must have taken a big decrease. When I was an orbital analyst at the Cheyenne Mountain Complex a long time ago we could depend on experienced analysts that had been there for a while. But few people ever moved to a new assignment and came back. When I went on to a radar site I continued into other areas – for career broadening. It would have been bad for my career to have gone back to the same job that I had come from.

  22. It seems a utility vehicle in space to move just this type of object out of orbit would be essential to safely continuing the use of satellites. However, it would need a fuel depot from which to resupply. If the collision causes even one more satellite loss, it would start to make sense from an insurance standpoint to have a cost-effective vehicle on standby to remove hazardous objects from orbit. A simple machine capable of removing dead satellites would be a good start.

  23. When I was an orbital analyst at the Cheyenne Mountain Complex a long time ago we could depend on experienced analysts that had been there for a while. But few people ever moved to a new assignment and came back. When I went on to a radar site I continued into other areas – for career broadening. It would have been bad for my career to have gone back to the same job that I had come from.

    I should’ve said that I was talking about the old-hand civilians who worked up there seemingly forever. One of the guys who moved to Vandy had about 15 years of OA experience. He was nowhere near the most experienced OA there. He’s the one who moved back to the Springs.
