Category Archives: Technology and Society

Cable TV Regulation

Via Technology Review, an article on the technical objections to a la carte cable service. It turns out the complaints by Comcast and Time Warner that it’s technically difficult are flat-out BS. Surprise!

You’d think that the cable companies would stand to benefit by going to an a la carte model – I know I’d be much more likely to get cable if I could pick and choose, and pay for only those channels I’m interested in. Also, by letting customers pick channels for themselves the cable companies would get a much better read on what their viewers are interested in, which would help them pitch advertising more effectively.

I dislike government telling businesses how to run their operations, so I oppose forcing cable companies to go to an a la carte model. The fact that the media megacorps feel the need to shade the truth about the costs is interesting, though. Much more worthy of government intervention, to my mind, is the simple fact that media megaconglomerates exist. Concentrations of power are a threat to liberty regardless of whether they are governmental or private. Concentrations of power within the media are particularly dangerous, because they can shape our perceptions of the world. If there’s any area where heavy-handed intervention in the marketplace is justified, it’s in breaking up media conglomerates.

Incidentally, I realize there’s a widespread view within the blogosphere that blogs represent a revolution in information accessibility that makes old media irrelevant. This is such a dumb notion that I have a hard time figuring out how to address it without insulting the reader’s intelligence. Blogs are a new, parallel information source (with a godawful signal-to-noise ratio), which offers access only to people who actively seek it out. Suffice it to say, the number of people reading blogs for information that challenges their preconceptions is small. If blogs become people’s primary information source about the world, the US will fragment into tiny groups of people whose worldviews are so different that meaningful communication between them is effectively impossible. We’re headed that way now, so maybe I should just stop worrying about it.

More Computer voting

Via MIT’s Technology Review, an item on computer voting and the upcoming election.

There was a particularly stupid and ill-informed op-ed (warning: audio link) on PRI’s show Marketplace yesterday. Basically the commentator felt that since ATMs are so reliable, we should trust voting machines. This completely ignores the fact that ATMs have multiple redundant means of catching errors: they generate a paper trail at the time of the transaction, the customer has additional opportunities to catch errors when they receive their bank statement, and the bank has enormous incentives to ensure correct accounting if it wants to stay in business. If there is a potential problem with an ATM it can be taken offline for a couple of days until it is fixed.

Electronic voting machines, by contrast, are put to the test only once every couple of years, they are set up by people with minimal training, there is no independent audit trail, and there is considerable incentive to falsify votes, since a successful attacker knows that he or his allies will control the investigation into what happened. Only an independent, voter-verifiable audit trail can make electronic voting credible. Unfortunately my state (MD) is dragging its feet on this issue despite a well-organized effort to knock some sense into the heads of the Election Commission.

I blogged this topic earlier, and I’ll do it again before the election. This is the single most important technological issue facing the US. We have the potential to completely invalidate elections. Without trust in the electoral process government has no legitimacy, and people will be forced to either accept disenfranchisement or resist with force. That may sound like hyperbole, but I suggest you think carefully about the likely reaction if there is a significant split between exit polls and reported (utterly unverifiable) election results in a hotly contested election. I don’t think rioting is at all unlikely, and public officials being hanged from lampposts is a real possibility. It’s all well and good to joke about that being a good thing, but there’s no guarantee that the officials hanged would be the guilty ones, or that large-scale public disorder would in any way actually address the problem. Just ask Reginald Denny.

I spent four hours last night working with commonly used commercial software which crashed three times. It was Microsoft Word, so there’s something of an expectation that it’s a P.O.S., but it’s at least as heavily tested as the Diebold software that I’ll be using to cast my vote in November. My confidence in the system working as it should is not high.

Bad, Bad, Bad idea

There’s a bill working its way through Congress that would criminalize the sale of technology that intentionally induces a person to infringe copyright. That places all recording media under threat. This is one of those bills written at the behest of major corporations looking to compete via legislation rather than in the marketplace.

Information simply cannot be force-fit into the conventional mold of property-rights law that originated in the ownership of land. Patents are workable as a means of protecting intellectual property, though they have been abused somewhat recently. Copyrights, on the other hand, are being abused and manipulated to an unprecedented degree. We recently saw the extension of copyright by an additional 20 years (thanks to some heavy lobbying by Disney, among others), and there’s no doubt that when those 20 years are up efforts will be well under way to extend it by another 20. The copyright system is broken, and this latest bill will just break it still further. We need to completely rethink the way we handle copyrights from the ground up. I can’t claim to know what the answer is, but it’s clear what it isn’t: banning technologies just because they can infringe copyright. That is an idiotic route that leads to making pen and paper technically illegal.

The latest Crypto-Gram

Crypto-Gram is a monthly newsletter on security issues put out by Bruce Schneier of Counterpane Internet Security. I’ve mentioned it before, but it bears repeating. The link above is to the latest issue, which includes a well-argued piece on handling terrorist suspects without skirting the Constitution. Schneier argues that it’s not necessary to work around established due-process rules in order to deal effectively with terrorism. There are a couple of other really good items in this issue, notably the item on economic motivations for security theater (insurance companies will give you breaks on premiums if you install X-ray machines, even if you don’t use them effectively), and the item on ICS, a company selling an encryption scheme which they claim – get this – uses no math. Brilliant.

Anyway, if you’re at all interested in security issues and the tradeoffs between security and liberty, go on over and take a look.

Myopic

John Derbyshire has been asking questions about why frozen sperm survives freezing, and gets a knowledgeable email on the subject. The emailer does understand the issues, except for this:

A good post-thaw viability (survival of cells) is around 60% of the total of cells– some people advertise >80% or 90-%, but that is a bit of a ‘lie via statistics’ game– they don’t count all the dead population in computing the percentage. We are working here with different, more efficacious, and non-toxic CPAs, of which the most promising appears to be arabinogalactin extracted from larch trees.

As you can see, this is the reason that we will never get Ted Williams back among the living. His frozen body consisting of billions of cells simply would not work with only ~60% of the cells surviving the thaw process. As one can say, God instills the soul when He wishes, and outsmarts us all.

This, of course, presumes that the only method we will have, now and forever, is crude thawing. It ignores the future possibility of different techniques for restoring the tissue to room temperature and viability (e.g., nanomachinery that repairs as it warms). It’s fair to have an opinion that we may never have such capability, but it’s quite foolish, I think, to believe categorically that this is so.

More Supersonics

Kevin Murphy has some thoughts about supersonics, based on my previous post. He’s skeptical.

Given that he hasn’t stooped to calling me a scientific lightweight incapable of understanding mathematics, that’s fine, but he doesn’t really understand the whole picture, which is understandable, since I haven’t really presented it. This is a matter of some frustration to me, but one that I can do little about until I can persuade the company involved to put up information on the web, so that it can be critiqued and reviewed.

Regardless, I’ll try to respond to his comments as best I can under the circumstances (which include limited time on my part).

…even if you have the same drag coefficient at supersonic as you do at subsonic — your drag, and thus fuel consumption, will increase substantially.

The key clause here is “if you have the same drag coefficient at supersonic.” At least for the wing, it’s actually possible to do better, at least in terms of induced drag (a three-dimensional effect arising at the wingtips, which makes the wing’s drag greater than the two-dimensional ideal), which actually decreases at higher speeds. The notion, right or wrong, postulates that supersonic L/D for aircraft designed under this theory will be similar to that of subsonic aircraft, so it offers the potential (if not the promise) of airfares comparable to subsonic fares for the same routes.
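Murphy’s point about drag growing with speed, and the counterpoint about L/D, can both be seen in the standard textbook relations below. These are generic formulas, not anything specific to the proposed design:

```latex
% Drag at a fixed drag coefficient still grows with the square of speed:
D = \tfrac{1}{2}\,\rho V^2 S\, C_D
% But in level cruise lift must equal weight, so the thrust (and fuel
% burn) required is set by the lift-to-drag ratio, not by speed alone:
T_{\mathrm{req}} = \frac{W}{L/D}
% The Breguet range equation makes the tradeoff explicit: range scales
% with V(L/D)/c, so a faster aircraft with comparable L/D and specific
% fuel consumption c achieves comparable range and fuel cost per seat:
R = \frac{V}{c}\,\frac{L}{D}\,\ln\frac{W_{\mathrm{initial}}}{W_{\mathrm{final}}}
```

This is why the whole argument turns on whether supersonic L/D can really be made comparable to subsonic L/D.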

With regard to his comments on angle of attack, they’re not relevant, because any nonzero angle of attack will dramatically increase wave drag and induce shock waves. The aircraft’s nominal design condition is zero AOA. Takeoff and time to cruise aren’t an issue either (nor is the engine), because we can get rid of the extreme sweep that has always been associated with supersonic aircraft (a design stratagem that was always a kludge, a way of minimizing wave drag without solving the fundamental problem).
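The claim that any nonzero angle of attack adds wave drag is consistent with standard linearized (Ackeret) thin-airfoil theory for supersonic flow. This is a textbook result, not the analysis behind the concept under discussion:

```latex
% Ackeret (linearized) lift coefficient for a thin airfoil at M > 1:
c_l = \frac{4\alpha}{\sqrt{M_\infty^2 - 1}}
% The wave-drag coefficient picks up a term quadratic in angle of
% attack, plus mean-square camber- and thickness-slope terms
% (overbars denote chordwise averages):
c_{d,\mathrm{wave}} = \frac{4}{\sqrt{M_\infty^2 - 1}}
  \left(\alpha^2
        + \overline{\left(\tfrac{dy_c}{dx}\right)^{\!2}}
        + \overline{\left(\tfrac{dy_t}{dx}\right)^{\!2}}\right)
% The alpha^2 term is why any nonzero AOA strictly increases wave drag.
```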

Something like the SR-71’s engines is a likely solution, at least in terms of the inlet, but that’s not a problem, because they’ll be optimized for fuel economy at cruise speed (which will constitute most of their operating time), not takeoff and landing. Also, we’re not proposing anything as fast as the Blackbird; Mach 2.4 will probably be adequate.

But here is really the crux of the issue.

The claim is that with enough leading edge sharpness and the proper contouring behind, you can fly supersonically without shockwaves, except circulation (flow around the airfoil) which produces lift eliminates the shockless effect. Why would this be? Well, without lift on a sharp symmetric airfoil the stagnation point would be at the leading edge. If you add circulation, perhaps you move the stagnation point so that it is no longer on the leading edge. Could this be the problem? The flow splits at the stagnation point (that’s where it stops), and if it isn’t sharp where it splits, you get a shockwave? If that is the case, well, we’re screwed. No amount of adding in balancing circulation downstream will matter, and adding it to the flow over the wing to cancel it out will mean an end to the lift from the wing. Now you could make an unsymmetrical airfoil such that at the cruise condition the stagnation point is on the sharp point of the airfoil, but you’d have shockwave drag getting to that point (or if you had to fly off the design point).

The proposal is not to build a symmetric airfoil. Stagnation points really aren’t relevant.

Imagine a Busemann biplane, which is really a de Laval nozzle formed between two wings. The top of the upper wing is flat, as is the bottom of the lower wing. That allows the airflow to move past without shock. The ramping occurs within the two wings. Busemann showed that this will have a shock-free flow, but because of the symmetry, it has no lift. Now imagine that the lower wing is dynamic – it’s actually a supersonic airflow coming from a non-shocking duct, with a flat lower surface. The lower surface of the “biplane” (after a short ramp) is a stream of higher-energy air (to satisfy Crocco), which mixes into the total flow to provide the anti-circulation that balances the wing circulation.
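For readers unfamiliar with the configuration: the classic Busemann biplane achieves its shock cancellation only at the design Mach number. A simplified geometric statement of that condition, as a sketch (assuming wedge shoulders at mid-chord and a gap G between the wings, both my assumptions for illustration), is:

```latex
% The oblique compression wave from each leading-edge wedge, at wave
% angle beta (set by Mach number and wedge angle), must cross the gap G
% and strike the opposite wing's shoulder at mid-chord:
\tan\beta = \frac{2G}{c}
% where it is cancelled by the expansion there.  Off the design Mach
% number the waves miss the shoulders and wave drag reappears, which is
% one reason the classic fixed-geometry biplane was never practical.
```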

The idea is to provide that balance, eliminating the need for the highly entropic downstream vortices, which require far more energy than simply providing the balance does. It spreads the residual shocks over a much larger footprint, reducing the overpressure on the ground (in pounds per square foot) almost to insignificance, and essentially eliminates the wave drag.

Bottom line: if this works (and I don’t claim that it will – only that it’s not obvious to me that it won’t), it means wide-body supersonic aircraft, at non-ozone-eating altitudes, at ticket prices comparable to subsonic ones. It means obsolescing the current subsonic fleet, outside a few niches, in the same way that prop-driven airplanes were put out of business by jets.

I think that it’s worth spending a tiny fraction (how about a percent of one year’s budget?) of the billion-plus dollars that NASA wasted on the High-Speed Research program, but NASA didn’t agree in the late nineties, even when Congress specifically appropriated it.