August 25, 2004 8:52 PM

Foreign Policy


I am sure I'll get flak from some people about my last post. They'll say, "So what is your suggested alternative to our current foreign policy? It is easy to attack other people's ideas, but it is much more difficult to present an alternative."

Fair enough. I'll tell you what I'd prefer our government's foreign policy to be, assuming we need to have a State at all. My proposal is pretty simple: Swiss-style armed neutrality. That means no invasions, no military threats, no foreign aid, no "covert operations", no military bases outside the country, no attempts to influence the internal affairs of foreign countries whatsoever.

No one blows up bombs in the streets of Geneva. No one from Switzerland gets kidnapped in third world countries to protest the evils of Swiss foreign policy. Wherever they go, at worst, people think of the Swiss as boring — it is rare that anyone feels the need to buttonhole someone from Zurich or Lugano and tell them off for what their government does.

The Swiss are not pacifists, though. They have a very strong militia for defense, and in times past when Europe was less peaceful, it would have been extremely costly for an attacker to invade them. Even if (in the case of particularly strong enemies) an invasion might have ultimately succeeded, it would have yielded very little of value at an astonishing expense.

Such a foreign policy perfectly suits the minarchist excuse for government: that it exists to protect its citizens and their property from violence within the borders of the country. It is pretty inarguably perfect for that purpose. (I'm not a believer in the necessity of even a minimal state, but that's not today's discussion.)

I think the U.S. would do just fine with such a policy. It is unlikely that many countries could attempt an invasion of the U.S. given our geography. With a strong militia, armed to the teeth, no such invasion would be likely to succeed even if someone were foolish enough to try. In addition, we have a large nuclear arsenal, which should make any potential attacker worry about the fate of their home territory. The nuclear weapons pretty effectively deter any attempts at missile-based attacks, too. Realistically, were we neutral, we would not be attacked at home if we were even moderately careful.

For a while, we might still get terrorist threats from people who hadn't realized that we had withdrawn our forces from overseas and weren't going to bring them back, but those would fade after a while. In the long run we'd be fine.

Such a policy is also far, far cheaper than the one we pursue now — the economic benefits alone would be more than worth it.

Some might argue that we would not have a force capable of deterring attacks on U.S. shipping — especially oil shipments — without a strong military capable of foreign intervention, but I don't believe that such a use for the military is a good idea in the first place. For one thing, it distorts the market for commodities like oil because the market price does not reflect the true cost (including armed security) of importing the commodity. My solution would be for the oil companies to simply hire private security to guard their own tankers and leave it at that — if the cost is high, then let the market price for oil reflect that.

Some might also argue that a strong military is needed to defend U.S. citizens overseas, but I doubt that. As I noted, how often are the Swiss targeted for political reasons?

Lastly, some might argue that we have an obligation, as a nation, to defend the interests of those under the thumbs of totalitarian regimes abroad. As I've noted elsewhere, however, U.S. foreign policy has propped up and indeed created totalitarian regimes far more often than it has attacked them. This is a simple instance of the universal rule that governments don't do what you want them to do — they do what public choice economics causes them to do. We can dream all we like, but governments are made up of people with their own agendas.

Furthermore, as I've also noted elsewhere, I have no objection to people spending their own resources and risking their own lives liberating the downtrodden in the third world, or persuading others to do so voluntarily, but the Non-Coercion Principle that we libertarians follow says that we don't use force to get others to spend their money and risk their lives for our causes, no matter how noble our cause may be. Whether the purpose is curing cancer or building a football stadium, coercion is still coercion, and libertarians don't coerce others into paying or doing.

By the way, this is all pretty standard stuff. Libertarians have been advocating this position for decades, and I don't understand how it can be the least bit controversial among people of our political clan at this point.


Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Politics, Security

August 20, 2004 12:57 PM

Hash Function Roundup


Ekr has posted a good summary of the recent results from Crypto '04 on the cryptanalysis of hash functions. The general gist is that, as of right now, SHA-1 and its "SHA-2" descendants have not yet been successfully attacked, but most of the others have.

Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Science & Technology, Security

August 16, 2004 5:50 PM

More Hash Functions Broken


Following up on this rumor from earlier today.

There is still no confirmation out there of the break in SHA-1, but this preprint, which went up today, reports collisions in MD4, MD5, HAVAL-128 and RIPEMD, all achieved with very little CPU time. That pretty much covers all the cryptographic hash functions in use.
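For anyone who wants to check such a result for themselves once the colliding messages are published, verification is trivial. Here is a minimal Python sketch; the two hex strings below are placeholders, not the actual colliding blocks from the preprint:

    import hashlib

    # Placeholder values -- substitute the actual colliding message pair from
    # the preprint. These two strings are NOT a real collision.
    msg1 = bytes.fromhex("00" * 64)
    msg2 = bytes.fromhex("01" * 64)

    print("messages differ:", msg1 != msg2)
    # For a genuine collision pair, the two digests come out identical.
    print("digests equal:  ",
          hashlib.md5(msg1).hexdigest() == hashlib.md5(msg2).hexdigest())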

It feels as though once someone found the right thread to pull on, the whole sweater started to unravel.


Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Science & Technology, Security

August 16, 2004 4:01 PM

Rumors of breaks in SHA-1


This will probably be incomprehensible to many of my readers — if you don't know anything about cryptography you might not even care about it. See this Wikipedia article if you would like an introduction to the topic of cryptographic hash functions.
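For the impatient, here is a quick illustration of what a cryptographic hash function does, using Python's hashlib purely as an example:

    import hashlib

    # A hash function maps an input of any length to a short, fixed-length digest.
    print(hashlib.sha1(b"hello world").hexdigest())   # 160 bits, printed as 40 hex chars
    print(hashlib.sha1(b"hello world!").hexdigest())  # a tiny change yields an unrelated digest

    # "Finding a collision" means finding two *different* inputs that hash to
    # the same digest -- something that is supposed to be computationally infeasible.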

Chen and Biham were due to report some attacks on SHA-0 this week at Crypto. Last week, it was reported that Antoine Joux had extended this work into a full-scale method for finding collisions in SHA-0 with time complexity of 2^51, and would also be reporting his results at the conference.
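To put that 2^51 figure in perspective: a generic birthday attack on any hash with a 160-bit output needs on the order of 2^80 evaluations, so an attack at 2^51 is roughly half a billion times cheaper than brute force. A quick back-of-the-envelope check:

    # Rough comparison of the reported SHA-0 attack against the generic birthday bound.
    generic_birthday = 2 ** 80   # ~square root of the 2^160 output space
    joux_attack      = 2 ** 51
    print("speedup factor: 2^%d = %d" % (80 - 51, generic_birthday // joux_attack))
    # prints: speedup factor: 2^29 = 536870912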

Ed Felten is now reporting that a rumor has started at Crypto that someone has further extended the Joux attack to an attack on SHA-1 and may announce the details at the conference later in the week. Since SHA-0 is only of academic interest but SHA-1 is deployed in lots of cryptosystems, this is naturally getting lots and lots of buzz.

As a side note, if this proves to be true, even if it is only a certificational weakness, it will be very embarrassing to the NSA. It is almost certainly the case that they would not release an algorithm that they knew had even a certificational weakness, thus implying that if there is such an attack, they did not know about it when they corrected SHA-0 into SHA-1.

It is unclear how such a break would impact HMAC when used with SHA-1 without knowing more details, if there are any details. Stay tuned.
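For context, HMAC does not apply the hash directly to the message; it wraps it in a keyed construction, and collisions in the bare hash are generally not thought to translate immediately into HMAC forgeries. A minimal sketch of HMAC-SHA1 as typically used (the key below is obviously just an illustration):

    import hmac, hashlib

    key = b"shared-secret-key"        # illustrative only
    message = b"message to authenticate"

    tag = hmac.new(key, message, hashlib.sha1).hexdigest()

    # The verifier, holding the same key, recomputes the tag and compares.
    expected = hmac.new(key, message, hashlib.sha1).hexdigest()
    print("verified:", hmac.compare_digest(tag, expected))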


Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Science & Technology, Security

August 10, 2004 3:03 PM

Stupid Virus Blocking


This entry is directed at the frustrated computer system administrators of the world. The rest of you may have no idea what I'm talking about.

Worried about computer viruses striking your network?

My method of stopping viruses from striking my users is phenomenally effective, yet incredibly cheap. I simply block all email attachments bearing Microsoft file types that my users are unlikely to have any real desire to get in email.

At the moment, that means I have a Postfix configuration that contains the following header_checks:

/^Content-(Type|Disposition):.*(file)?name=.*\.(asd|bat|chm|cmd|com|cpl|dll|exe|hlp|hta|js|jse|lnk|ocx|pif|rar|scr|shb|shm|shs|vb|vbe|vbs|vbx|vxd|wsf|wsh|zip)/ REJECT Sorry, we do not accept .${3} files.

For those who don't understand what that means, it instructs Postfix to look for message headers indicating any of a long list of attachment types, and if it finds one, to refuse to accept the message, telling the sender "Sorry, we do not accept .xyz files." with the offending extension filled in. If you don't use Postfix as your MTA, I'm sure that you can do similar things in most other sane MTAs. (If you use Microsoft Exchange as your MTA, you are out of luck, but then again you are probably out of luck anyway.)
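If you want a rough sanity check of which headers the rule will catch before deploying it, you can approximate the match in Python. Postfix's regexp and pcre table syntaxes differ slightly from Python's re module, so treat this as a sketch and also test against your real table with postmap -q:

    import re

    # Approximation of the header_checks pattern above; Postfix regexp tables
    # are case-insensitive by default, hence re.IGNORECASE.
    pattern = re.compile(
        r"^Content-(Type|Disposition):.*(file)?name=.*"
        r"\.(asd|bat|chm|cmd|com|cpl|dll|exe|hlp|hta|js|jse|lnk|ocx|pif|rar"
        r"|scr|shb|shm|shs|vb|vbe|vbs|vbx|vxd|wsf|wsh|zip)",
        re.IGNORECASE,
    )

    for header in (
        'Content-Disposition: attachment; filename="invoice.exe"',
        'Content-Type: image/jpeg; name="photo.jpg"',
    ):
        m = pattern.search(header)
        print(header, "->", "REJECT (." + m.group(3) + ")" if m else "accept")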

This approach is a bit heavy-handed, but I find that most of those file types are never included in any sort of legitimate email. Who would want to legitimately mail someone a .pif or .lnk file?

The big plus of the approach is that at the cost of one line of configuration, you pretty much ditch any possibility of ever seeing the next Microsoft Outlook virus. No one will ever send you an infected .exe or .scr file because you reject all of them — you will never have to worry that your virus scanner's rules are not up to date or something similar.

What are the minuses of doing this? Well, first, some users will occasionally want to get zip files in the mail. If you have no choice, you can let them through, but in practice I've never gotten complaints about this and I forward mail for lots of people. Second, this will not stop macro viruses that infest .doc and .xls files and the like. It isn't a complete substitute for having a virus scanner, though it does remarkably well.

In general, this is a really cheap and efficient barrier to put at your outermost MTA, and the people and organizations I know who have done it have never regretted it.


Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Security, Software

July 28, 2004 9:59 PM

Non-Risk of the Day


news.com.com.com.com is reporting that someone has realized that you can reprogram RFID tags so that the scanner at the checkout thinks you're buying something cheaper than what you actually picked up.

Of course, the idea of doing exactly the same thing with printed bar codes, by making up fake ones and sticking them on merchandise, has been floating around for a long time. I don't see how the RFID threat is significantly different.


Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Security

July 27, 2004 12:03 PM

News Flash: Proprietary OS Vendor Dislikes Linux!


Dan O'Dowd, the CEO of Green Hills Software (which sells proprietary operating systems, often for defense contracts) has written an article in which he argues that Linux (and by implication, all open source software) should not be used in defense contracts. He claims that open source is a major security threat to defense systems, because evil foreign agents could infiltrate the open source developer community and insert trojan horses into software later used for military purposes.

I'm a big believer in avoiding the Ad Hominem fallacy, so even though Dan O'Dowd has every reason in the world to make this up from whole cloth to protect his business, let's treat the claim seriously and address it.

It is true that evil foreign agents could try to get trojan horses into the Linux sources (as could evil domestic agents). However, they could also get jobs with companies like, say, Green Hills, or other defense contractors. The latter would seem like a far more direct route to sabotage, since you get a close look at how your software will be used and thus can plan your sabotage much more effectively.

Although it is true that people working on defense contracts usually have security clearances, it is far from clear that such clearances actually prevent espionage or sabotage. I know of no studies that validate the methodology used in security clearances, and certainly the "security clearance" barrier hasn't prevented lots of folks from causing damage to U.S. interests even when they've had the clearances.

It is also the case that much of the software that goes into defense systems is produced by people with no clearances whatsoever -- I doubt that Green Hills, for example, always goes to the trouble of clearing the guys who work on their base software products if they are not going to be doing classified work.

We also have the question of the "many eyes" theory of open source security, which O'Dowd makes fun of. Many open source advocates note that since anyone who wants to can read the source code to an open source product, it is harder to conceal back doors. O'Dowd attacks this by saying that there are nonetheless security holes found quite regularly in Linux. What he does not mention is that there are also security holes found quite regularly in Windows and other proprietary operating systems, and that there might even be security holes in his own products. The question we are looking at here is not whether or not there are bugs -- the question is whether it is easier or harder to conceal an intentional flaw in an open source system.

Although it is true that the ability of large numbers of people to read the code is no panacea, it certainly is a help. There are comparatively few people who get to read the code in proprietary systems, such as the ones Green Hills sells, so there are fewer people in a position to catch a trojan inserted by a rogue programmer.

Mr. O'Dowd also misses one of the most important aspects of security -- he fails to discuss the economic tradeoffs (if any) being made in a given security decision. He mentions only the possible problems of using an open source operating system, but he ignores the price associated with not using one. Against the weak claim of decreased security, we have to balance the loss of functionality and increased cost that using a proprietary operating system might cause. Developers do not select open source software at random. They adopt it because it gives them better functionality and has a lower cost.

Indeed, the cost savings and productivity benefits of open source systems might easily make it possible to devote more effort to security in a design, and the improved tools available can make security far easier to implement. Open source operating system users take features like packet filters, MMU based memory protection for multiple processes, logging facilities, etc., for granted, but these features are not available in many conventional embedded operating systems. Even those that do offer a particular feature rarely provide the breadth of functionality of the open source systems.

Lastly, let me note that Mr. O'Dowd appears to be inventing the threat he describes. I doubt he has any actual evidence of evil foreign agents trying to subvert defense products by sneaking trojan horses into the Linux source base. If he does have such evidence, he did not mention it.

Overall, I think his argument against open source is pretty weak. I don't think defense agencies should give it much heed.


Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Open Source, Security

July 26, 2004 11:47 PM

Potemkin Security at the DNC


Another interesting find at Cryptome: according to this set of web pages, security at the Democratic National Convention turns out to be quite unprofessional in places.

I was a bit skeptical, but a friend of mine who is a ham radio operator in Boston confirms that the posted radio frequencies are indeed in use as stated and that the traffic on them is unencrypted, and the photographs of unprotected facilities speak for themselves. The descriptions in the report are a bit breathless, but they appear to be essentially plausible.

It seems that Potemkin Security is everywhere -- even at a national political convention. We're willing to shut down all the highways in Boston, but no one will even think to properly install fencing or to encrypt security communications.


Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Politics, Security

July 25, 2004 3:27 PM

Transport Insecurity


I generally think that trying to stop people from bringing pocket knives onto airplanes isn't very useful. It is an example of what I like to call "Potemkin Security" (or what Bruce Schneier calls "Security Theater"). It provides the feeling that something is being done even if it doesn't actually accomplish much, and thus gives people the ability to say "see, we're doing something about security!"

I mention this today because a friend of mine just told me that they had accidentally flown out of LaGuardia Airport a few days ago with a Leatherman in their bag, and hadn't realized it until someone caught it when they tried to board their flight back to New York today. I hear stories like this all the time, and there are even some known incidents of people accidentally bringing firearms onto aircraft without anyone stopping them. (I suspect those might happen routinely but for the fact that there are very few people who forget that they are carrying a gun and then try to board an airplane.)

I suppose it shouldn't be surprising that the TSA doesn't even do the wrong job very well. It seems like a fine example of what happens when people demand that the government "do something" about a problem, without contemplating too seriously what the right "something" might be.


Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Politics, Security

July 25, 2004 1:12 PM

Shocking News: Government Agency is Ineffective!


An article in InformationWeek reveals the shocking fact that the Department of Homeland Security's efforts to fight "cybercrime" are "plagued by problems". One good quote:
"Despite the progress made, DHS faces significant challenges in developing and implementing a program to protect our national cyber-infrastructure," Ervin's report said.

Of course, one asks what they legitimately could do to "protect our national cyber-infrastructure". Those of us who are actually involved in computer security are working pretty hard to come up with solutions to things like denial of service attacks, viruses, and other issues. There isn't terribly much they could be doing other than law enforcement, and they don't seem to ever do any of that. People are, for practical purposes, never prosecuted for computer break-ins. (There are prosecutions, but they constitute a microscopic fraction of the number of incidents.)

One of the things I find bizarre about the whole thing is that the government is under the delusion that it is, in fact, involved. They spend money and have departments with appropriate names and such, but so far as I can tell none of it has any connection to reality. (I'm not including the folks at places like NSA who actually do computer security for their organizations every day. I mean the various "information security task force" types.)

So, there are folks in Washington who must go in to the office every day and think they are involved with keeping our networks secure, when in fact nothing they do has any impact on the problem at all. This kind of thing appears to be a common feature of large bureaucracies. I've been struggling to come up with a pithy word or metaphor for it without much success. The only thing that pops into mind for me today is the Aztec priesthood. Those were the folks who thought that if they didn't cut out someone's heart every day, the sun would stop rising.

It is sort of the inverse of a "Cargo Cult". Instead of your actions bringing about no results even though you think you're doing everything right, the results you want keep happening even though your actions have nothing to do with them at all, and you are convinced you are the cause.

This brings up a couple of questions.

  • Is there a good word or phrase for this sort of thing? That is, is there a good word for "people who think they're doing something but who are in fact completely uninvolved?" There are excellent phrases for similar concepts -- "Potemkin Village", "Cargo Cult", etc. -- but none of them quite captures the idea precisely.
  • Is it actually for the best that these folks are kept busy thinking they're involved when they aren't, so that they don't cause damage by actually becoming involved? It doesn't seem as though we can prevent the government from wanting to "do something" about computer security, so maybe keeping them occupied with reports, studies and "coordinating activities" is, in fact, a good thing.

Addendum: A friend writes to me and says: The best comment I've heard about DHS is "They can't even piss through an open window."


Posted by Perry E. Metzger | Send Feedback | Permalink | Categories: Politics, Security