My god, what a nightmare. I'm not a technical person, but the more I read, the more I realized how broken by design this language is. Let me quote the block that made me wonder who was drinking what when they concocted this misbegotten mess:
Let us look at an example in a computing context, of how keys/capabilities would change security.
Consider the Melissa virus, now ancient but still remembered in the form of each new generation of viruses that use the same strategy that Melissa used. Melissa comes to you as an email message attachment. When you open it, it reads your address book, then sends itself - using your email system, your email address, and your good reputation - to the people listed therein. You only had to make one easy-to-make mistake to cause this sequence: you had to run the executable file found as an attachment, sent (apparently) by someone you knew well and trusted fully.
Suppose your mail system was written in a capability-secure programming language. Suppose it responded to a double-click on an attachment by trying to run the attachment as an emaker. The attachment would have to request a capability for each special power it needed. So Melissa, upon starting up, would first find itself required to ask you, "Can I read your address book?" Since you received the message from a trusted friend, perhaps you would say yes - neither Melissa nor anything else can hurt you just by reading the file. But this would be an unusual request from an email message, and should reasonably set you on guard.
Next, Melissa would have to ask you, "Can I have a direct connection to the Internet?" At this point only the most naive user would fail to realize that this email message, no matter how strong the claim that it came from a friend, is up to no good purpose. You would say "No!"
And that would be the end of Melissa, all the recent similar viruses, and all the future similar viruses yet to come. No fuss, no muss. They would never rate a mention in the news. Further discussion of locally running untrusted code as in this example can be found later under Mobile Code.
Before we get to mobile code, we first discuss securing applications in a distributed context, i.e., protecting your distributed software system from both total strangers and from questionable participants even though different parts of your program run on different machines flung widely across the Internet (or across your Intranet, as the case may be). This is the immediate topic.
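Before getting to why I think this is nonsense, it's worth seeing what the quoted scenario actually means in code. Here is a minimal sketch, in Python, of the confinement model the quote describes: untrusted code gets no ambient authority and must ask for each capability, and authority is just an object reference. Every name here (User, AddressBook, run_attachment, melissa) is hypothetical, invented for illustration - none of it is E's actual API.

```python
# Sketch of capability-style confinement, assuming a hypothetical
# mail client. All names are invented for illustration.

class CapabilityDenied(Exception):
    pass

class AddressBook:
    """A capability: holding a reference to this IS the permission."""
    def __init__(self, entries):
        self._entries = list(entries)
    def read(self):
        return list(self._entries)

class User:
    """Stands in for the human who grants or denies each request."""
    def __init__(self, decisions):
        self._decisions = decisions   # e.g. {"read address book": True}
    def grant(self, description, capability):
        if self._decisions.get(description, False):
            return capability
        raise CapabilityDenied(description)

def run_attachment(attachment, user, address_book):
    """Run untrusted code with NO ambient authority: it can only use
    capabilities the user explicitly hands over, one request at a time."""
    on_offer = {
        "read address book": address_book,
        # Note: no network capability is even available to grant here.
    }
    return attachment(lambda desc: user.grant(desc, on_offer.get(desc)))

def melissa(request):
    """The virus from the quote, rewritten to beg for its powers."""
    book = request("read address book")        # user might allow this
    victims = book.read()
    net = request("open internet connection")  # user refuses -> exception
    for addr in victims:
        net.send(addr, "melissa payload")

user = User({"read address book": True, "open internet connection": False})
book = AddressBook(["alice@example.com", "bob@example.com"])
try:
    run_attachment(melissa, user, book)
except CapabilityDenied as denied:
    print("Blocked at:", denied)
```

The point of the sketch is that the virus dies at the second request: it never obtains a network reference, so there is nothing for it to abuse.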
This is patently idiotic, the product of an egomaniac's delusional ravings. I've used Vista, which asks for confirmation every two steps, and I won't use it again. Why? Because having the user guard every single capability, every single time, means dozens of confirmations. What do people do? They turn the prompts off after the first few times. If this is what capabilities deliver, then capabilities are useless. What we want instead is a system that recognizes safe use cases and flags only the strange ones: something smart enough to grab the whole list of capabilities required up front, and present us with one dialog box that gives us some idea of what is going to happen.
Absent that, it is nagware.
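To be fair to my own proposal, the "grab the whole list up front" idea is easy to sketch: the program declares every capability it wants in a manifest, and the user sees one summarized dialog instead of a nag per request, with suspicious combinations flagged. These names (summarize, review, the manifest strings) are all hypothetical.

```python
# Sketch of a manifest-based grant: one consolidated prompt instead of
# a confirmation per capability. All names are hypothetical.

SUSPICIOUS_FOR_EMAIL = frozenset({
    "read address book",
    "open internet connection",
})

def summarize(manifest):
    """Build a single human-readable prompt from the full request list."""
    lines = ["This attachment is asking to:"]
    lines += [f"  - {need}" for need in manifest]
    return "\n".join(lines)

def review(manifest, suspicious=SUSPICIOUS_FOR_EMAIL):
    """Return the requests a sane mailer should warn about outright."""
    return suspicious & set(manifest)

manifest = ["read address book", "open internet connection"]
print(summarize(manifest))
flagged = review(manifest)
if flagged:
    print("Warning: unusual requests for an email attachment:",
          sorted(flagged))
```

The design point is that the judgment call happens once, over the whole picture, rather than being salami-sliced into reflexive per-prompt clicks.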
But there's more:
In the real physical world, if you had to depend on children to fetch CDs, you would not use an ID badge. Instead you would use keys. You would give the child a key to the front door, and a key to the CD cabinet. You would not give the child a key to the gun vault.
All current popular operating systems that have any security at all use the ID badge system of security. NT, Linux, and Unix share this fundamental security flaw. None come anywhere close to enabling POLA. The programming languages we use are just as bad or worse. Java at least has a security model, but it too is based on the ID badge system--an ID badge system so difficult to understand that in practice no one uses anything except the default settings (sandbox-default with mostly-no-authority, or executing-app with total-authority).
The "children" are the applications we run. In blissful unawareness, we give our ID badges to the programs automatically when we start them. The CD cabinet is the data a particular application should work on. The gun vault is the sensitive data to which that particular application should absolutely not have access. The children that always run to get a gun are computer viruses like the Love Bug.
In computerese, ID badge readers are called "access control lists". Keys are called "capabilities". The basic idea of capability security is to bring the revolutionary concept of an ordinary door key to computing.
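The quoted analogy maps onto code roughly like this (a sketch of the two models, not E's actual API): an ACL check asks "who are you?" against a central list, while a capability check asks only "what reference do you hold?". The resource names and functions below are invented for illustration.

```python
# Sketch contrasting the "ID badge" and "key" models from the quote.
# All names are hypothetical.

# ID-badge model: a central list decides, based on WHO is asking.
ACL = {
    "cd_cabinet": {"parent", "child"},
    "gun_vault": {"parent"},
}

def acl_open(resource, identity):
    if identity not in ACL[resource]:
        raise PermissionError(f"{identity} may not open {resource}")
    return f"contents of {resource}"

# Key model: whoever holds the key object can open the door;
# authority travels with the reference, not with an identity.
class Key:
    def __init__(self, resource):
        self._resource = resource
    def open(self):
        return f"contents of {self._resource}"

cd_key = Key("cd_cabinet")      # handed to the child
# A gun-vault key is withheld simply by never creating/handing one over.

print(acl_open("cd_cabinet", "child"))   # allowed by the list
print(cd_key.open())                     # allowed by possession
```

In the badge model, denying the child the gun vault means maintaining the list correctly; in the key model, it means never handing over the reference in the first place.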
I'm no security expert, but if an area is really supposed to be secure, it has ID badges, not keys. The problem with keys is that they don't know who uses them. What we really want is a key with an access control list and a time limit on it: something like a one-use credit card number.
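That hybrid is also easy to sketch: a capability that records who it was issued to, expires after a deadline, and burns itself after one use, like the one-use card number. This is my own hypothetical design, not anything from E; all names are invented.

```python
# Sketch of a "one-use key with a time on it": a capability object that
# logs its holder, expires, and revokes itself after first use.
# Hypothetical design, invented for illustration.
import time

class ExpiredKey(Exception):
    pass

class OneShotKey:
    def __init__(self, resource, holder, ttl_seconds):
        self._resource = resource
        self._holder = holder                          # kept for the audit trail
        self._deadline = time.monotonic() + ttl_seconds
        self._used = False
        self.audit_log = []
    def open(self):
        if self._used or time.monotonic() > self._deadline:
            raise ExpiredKey(self._resource)
        self._used = True                              # one use, like the card number
        self.audit_log.append((self._holder, self._resource))
        return f"contents of {self._resource}"

key = OneShotKey("cd_cabinet", holder="child", ttl_seconds=60)
print(key.open())          # first use succeeds and is logged
try:
    key.open()             # second use fails: the key burned itself
except ExpiredKey:
    print("key already used")
```

This keeps the convenience of a key (possession is permission) while answering my complaint that keys don't know who uses them.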
Marc Stiegler likes to be very judgmental about others, so I will apply the same standard to him. He's an arrogant moron who produced a monument to his ego, and it is a complete disaster area. He wrote this in 2000, and there is a reason "E" isn't sweeping the planet: it is the creation of a narrow mind that makes huge claims and then delivers something which self-evidently won't work.