Keep an open mind here, because the desire for absolutes drives a lot of the craziness. I've argued with people about unforgeable tokens, for example. There is no such thing.
The purpose of the paper is to outline what he calls "The Trust Paradigm" in VR and apply it to the problem of an Open Grid. The reason for doing this is that previous paradigms of trust are rooted in what I call "proton think": the belief that underneath the pile of signals is a pile of protons, and that trust, or data, is about getting as close to that pile of protons as possible. But in VR there really is no pile of protons you can rely on. An avatar can be played by many people, one person can have many avatars, and simulators will run in the "cloud."
A trust paradigm, to get ahead of myself, is a series of games, each of which has as its payoff the next step in the paradigm, ending with a final result: a trust decision. You can't know whether it is safe to trust; you can only decide it.
"There is no proton," as he puts it, "only the measurements." This is really how things work in SL, with people exchanging millions of dollars without knowing who the others are. We take in millions of linden a month and thousands of USD, and I know, I think, ten of the people we do business with IRL.
People talk about trust as if it were a matter of epistemology: what do you know, and how do you know it. Instead, trust should be viewed as a decision, based on measurable things, against a background of experience, with a particular risk that the decision will not work out based on future events, events which have some probability distribution that is known only insofar as it is reflected in past events. This means that the right way to think about trust is as a Bayesian decision, where the future is a surface determined by a risk function, and whose boundaries can be determined from the extremes of a metric space.
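To make "trust as a decision, not knowledge" concrete, here is a minimal sketch with made-up numbers (the entities, signals, and loss values are all hypothetical, not from the paper): you never find out whether the counterparty "really" is honest; you just pick the action with the smaller posterior expected loss.

```python
# Trust as a Bayesian decision (illustrative numbers only):
# choose the action that minimizes posterior expected loss, rather than
# trying to "know" whether the counterparty is honest.

def posterior(prior_honest, p_signal_given_honest, p_signal_given_dishonest):
    """P(honest | signal) by Bayes' rule."""
    num = prior_honest * p_signal_given_honest
    den = num + (1 - prior_honest) * p_signal_given_dishonest
    return num / den

def decide(p_honest, loss):
    """Pick the action with the smaller posterior expected loss.

    loss[action][state] is the cost of taking `action` when the
    counterparty turns out to be in `state`.
    """
    expected = {
        action: p_honest * costs["honest"] + (1 - p_honest) * costs["dishonest"]
        for action, costs in loss.items()
    }
    return min(expected, key=expected.get), expected

# Hypothetical signal: a clean transaction history.
p = posterior(prior_honest=0.9, p_signal_given_honest=0.8,
              p_signal_given_dishonest=0.3)

# Granting a capacity to a dishonest actor costs 100; denying an honest
# one costs 5 in lost business; everything else is free.
loss = {"grant": {"honest": 0.0, "dishonest": 100.0},
        "deny":  {"honest": 5.0, "dishonest": 0.0}}

action, expected = decide(p, loss)
```

Note that the decision flips if the loss matrix changes, even with the same evidence: the measurements stay fixed, the desires decide.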
The initial work of this kind was done by Herbert Robbins, who in 1950 introduced compound decision theory, and later applied Bayesian statistics to it. This came to be called the empirical Bayes method, or methods. (Anal-retentive alert: why is it lowercase "empirical Bayes" in prose but always capitalized as an abbreviation, EB?)
Compound decision theory showed that a decision involving many stochastically independent steps can have a reduced risk in the final decision, by using inter-relationships that can be found statistically. Or to put it another way: even things that seem to have nothing to do with each other can have something to do with each other. In 1955 Robbins added an important idea, empirical Bayes theory, in which he showed that Bayesian statistics could be used to approximate the best possible decision rule even without knowing the prior distribution. Or to put it another way: even something you don't know much about, you can know enough about.
His case was that of a Poisson distribution, that is, a set of discrete events over time in which the time elapsed since the last event carries no information about the next one. It is interesting to note that Poisson himself developed these methods for cases in criminal and civil law.
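Robbins' Poisson result is simple enough to sketch directly. Each entity has an unknown rate drawn from an unknown prior, and we see one count per entity; Robbins showed the posterior mean can be estimated as E[λ | X = x] = (x + 1)·f(x + 1)/f(x), where f is just the observed frequency of each count. The pooled data stands in for the prior you don't know. (The data below is a toy example of mine, not from the paper.)

```python
# Robbins' (1956) empirical Bayes estimator for the Poisson case:
# estimate an entity's rate from the pooled counts of ALL entities,
# with no knowledge of the prior distribution over rates.

from collections import Counter

def robbins_estimate(counts, x):
    """Estimate E[lambda | X = x] as (x + 1) * f(x + 1) / f(x),
    where f is the empirical frequency of each observed count."""
    freq = Counter(counts)
    if freq[x] == 0:
        raise ValueError(f"no observations with count {x}")
    return (x + 1) * freq[x + 1] / freq[x]

# Toy data: say, complaints per entity in one period.
counts = [0, 0, 0, 1, 1, 2]  # f(0)=3, f(1)=2, f(2)=1

# An entity with 1 complaint: estimated rate (1+1) * f(2)/f(1) = 1.0
est = robbins_estimate(counts, 1)
```

With six data points the estimates are crude, but this is exactly the "asymptotic" story: as the pool of repeated decisions grows, the rule approaches the one you would use if you knew the prior.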
The two pieces fit together because many compound decisions break down into separate decisions which can't be determined exactly from previous data, often because the different parts overlap and are noise in each other's signal.
Or to quote:
The empirical Bayes approach in statistical decision theory is appropriate when one is confronted repeatedly and independently with the same decision problems. In such instances it is reasonable to formulate the component problem in the sequence as a Bayes decision problem with respect to an unknown prior distribution on the parameter space and use the accumulated observations to improve the decision rule at each stage.
One area in which this is used particularly is selecting the best possible population from a group with an unknown distribution. This is, in spades, hearts, diamonds or no trump, the problem of trust in the VR environment. Trust is a compound decision: the distribution is unknown, but the decisions are repetitive.
This matters especially when the goal is to select an optimal population, because the ultimate goal of trust is not just to deny bad actors the chance to do something bad, but to weed the worst actors out of the population entirely.
So, to summarize: trust presents itself, from the point of view of both servers and users, as a series of events. Each party must decide, based on its own desires, whether to take a risk in extending a capacity or exchanging information. The interval between these decisions is random, and the behavior of the other entity is random. So there is a parameter space (the data an entity gets), a distribution, part of which is unknown, a utility function, and the need to break the compound of all of this down into discrete steps. This is exactly what EB does, at the high level.
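The summary above can be sketched as a loop. This is my construction, not the paper's: each trust event is one discrete decision, the pooled history of all entities stands in for the unknown distribution (via the Robbins Poisson rule), and the entity's own risk budget plays the part of the utility function.

```python
# Sketch of the trust loop: events arrive one at a time, and each grant
# decision pools the whole history to estimate this entity's risk.

from collections import Counter

def decide(history, incidents, risk_budget=1.0):
    """Grant a capacity iff the empirically estimated incident rate for
    entities with `incidents` past incidents stays under the risk budget.

    Uses Robbins' Poisson rule E[lambda | x] = (x+1) * f(x+1) / f(x)
    over the pooled history of all entities.
    """
    freq = Counter(history.values())
    if freq[incidents] == 0:
        return False  # no comparable entities yet: decline, don't "know"
    est_rate = (incidents + 1) * freq[incidents + 1] / freq[incidents]
    return est_rate <= risk_budget

# Fixed toy log: entity -> incident count so far (hypothetical data).
history = {"a": 0, "b": 0, "c": 0, "d": 0, "e": 0, "f": 0,
           "g": 1, "h": 1, "i": 1, "j": 2, "k": 2, "l": 3}

grant_clean = decide(history, incidents=0)  # est. rate 0.5 -> grant
grant_risky = decide(history, incidents=1)  # est. rate ~1.33 -> deny
```

With data this sparse the estimate is unreliable at the tail (an entity with the worst record can get an estimated rate of zero simply because no one is worse); a real system would smooth the frequencies, which is again the asymptotic point: the rule is only as good as the accumulated history.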
Now why is this important?
There is a great deal of looseness in talking about trust, and a desire to get down to the real physical things. Most trust systems try to create a known root of trust and compare everything else to that. The problem is that this is not the minimum risk: the larger the data sample, the riskier it gets, because the root is more and more likely to be corrupt or a target. Centralized single point of failure and all that.
What this does is move the question from abstract epistemology to the questions of what is desired, what is measured, and what can be inferred. It provides a basis for saying exactly what trust is:
Trust is the decision to accept a particular risk in granting a particular capacity, or set of capacities, in light of a Poisson distribution of future arrivals.
The problem of trust is to get as close as possible to the optimal risk rules. This is why papers centered on empirical Bayes theory talk about "asymptotically" approaching an optimal rule Q: the idea is to get "as close as you need."
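In symbols (my notation for the standard way this property is stated, not a formula from the paper):

```latex
% G is the unknown prior over the parameter space, R(t, G) the risk of
% a decision rule t against G, and t_n the rule built from the first n
% observations. "Asymptotically optimal" means:
\[
\lim_{n \to \infty} R(t_n, G) \;=\; \inf_{t} R(t, G) \;=\; R(G),
\]
% i.e. the risk of the data-driven rule converges to the best risk any
% rule could achieve if the prior G were known -- "as close as you need."
```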
However, what those discrete steps are is not provided by empirical Bayes theory. For that we need the theory of games, which I will outline in the next part.
No, I'm not this smart. Yes, I passed the statistical calculus class, so the squiggles aren't all blurry and I can follow the argument.
Zhang, Cun-Hui, "Compound Decision Theory and Empirical Bayes Methods", The Annals of Statistics, Vol. 31, No. 2 (Apr. 2003), pp. 379-390. Institute of Mathematical Statistics.
Banerjee, "Simplification of the Derivation of Wald's Formula for the Cost of Living Index", Econometrica, Vol. 24, No. 3 (Jul. 1956), pp. 296-298.
Eaton, Morris L., "Complete Class Theorems Derived from Conditional Complete Class Theorems", The Annals of Statistics, Vol. 6 (1978), pp. 820-827.
Matthes and Truax (1967), "Tests of Composite Hypotheses for the Multiparameter Exponential Family", Annals of Mathematical Statistics, Vol. 38, pp. 681-697.
Robbins, Herbert, "An Empirical Bayes Approach to Statistics", Proceedings of the Third Berkeley Symposium on Mathematical Statistics, Vol. 1, pp. 157-163, University of California Press, Berkeley, 1956.
Robbins, Adaptive Statistical Procedures and Related Topics: Proceedings of a Symposium in Honor of Herbert Robbins, June 7-11, 1985, Brookhaven National Laboratory, Upton, New York.
Snijders, Tom, "Complete Class Theorems for the Simplest Empirical Bayes Decision Problems", The Annals of Statistics, Vol. 5, No. 1 (Jan. 1977), pp. 164-171.
Wald, A., "A New Formula for the Index of Cost of Living", Econometrica, Vol. 7, No. 4 (Oct. 1939), pp. 319-331.