STAT 7100: Introduction to the Foundations of Statistics
Homework Set 1: Due February 1
Textbook problems: 7.23 (see notes), 7.24 (see notes), 7.33.
Notes, additional instructions, and additional problems:
Problem 7.23: The textbook authors are making a blanket assumption that a Bayes estimator is always
the mean of the posterior distribution. However, we know that the posterior mean will change under re-
parameterization, and other estimators are possible as alternative location summaries of the posterior dis-
tribution, such as a median. The authors arrive at their assumption by applying a certain decision-theoretic
viewpoint on constructing Bayes estimators, which you can read about in the second half of Section 7.3.4.
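To see concretely why the posterior mean is not invariant under reparameterization, here is a minimal numerical sketch; the Gamma posterior used is purely illustrative and not tied to any particular textbook problem:

```python
import numpy as np

rng = np.random.default_rng(0)
# An illustrative posterior for theta (any positive-valued posterior would do):
theta = rng.gamma(3.5, 1.0 / 6.0, size=500_000)

# Under the reparameterization eta = theta**2, the posterior mean of eta
# differs from the square of the posterior mean of theta (Jensen's inequality):
print(theta.mean() ** 2)    # (E[theta])^2
print((theta ** 2).mean())  # E[theta^2], strictly larger here
```

A posterior median, by contrast, is equivariant under monotone increasing reparameterizations, which is one reason alternative location summaries are worth considering.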
Problem 7.24: In addition to parts (a) and (b), complete the following parts.
(c) Find the prior that is the result of applying Jeffreys' general rule to Y = Σ_{i=1}^n X_i.
(d) Suppose n = 6 and the data X = (X_1, ..., X_n) are observed at x = (0, 2, 0, 0, 1, 0). Specify the prior as
that which you deduced in part (c); then use a Metropolis-Hastings algorithm to sample from the posterior
distribution of θ, given X = x, and report simulated values of the posterior mean and the 2.5%, 25%, 50%,
75%, and 97.5% quantiles.
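A minimal sketch of the Metropolis-Hastings sampler for part (d). It assumes a Poisson(θ) model with the Jeffreys prior π(θ) ∝ θ^{-1/2} — check that this matches what you derived in part (c) before relying on it — and the proposal scale, chain length, and burn-in below are arbitrary tuning choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([0, 2, 0, 0, 1, 0])
n, s = len(x), x.sum()

def log_post(theta):
    # log of [Poisson likelihood theta^s * e^{-n theta}] * [prior theta^{-1/2}],
    # up to an additive constant; -inf outside the support theta > 0
    if theta <= 0:
        return -np.inf
    return (s - 0.5) * np.log(theta) - n * theta

draws = np.empty(20_000)
theta = 1.0
for t in range(draws.size):
    prop = theta + rng.normal(0, 0.5)  # symmetric random-walk proposal
    # Metropolis acceptance step (proposal density cancels by symmetry)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws[t] = theta

post = draws[5_000:]  # discard burn-in
print(post.mean())
print(np.quantile(post, [0.025, 0.25, 0.5, 0.75, 0.975]))
```

Under these assumptions the posterior is Gamma(s + 1/2, n) in shape-rate form, so the chain's mean and quantiles can be checked against that closed form.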
Problem 4: Suppose X = (X_1, ..., X_n) is an absolutely continuous IID sample with common marginal
CDF F_X(x) = 1 − e^{−x/θ}; that is, each X_i ~ exponential(θ). Suppose the inference objective is point
estimation of the parameter θ, which is to be evaluated under the loss function L(θ, a) = (θ − a)^2. Consider
the decision rules T(X) = n^{−1} Σ_{i=1}^n X_i and U(X) = c X_(1), in which X_(1) = min_i X_i is the first order statistic
of X and c is a fixed constant. Calculate the risk functions R_L(θ, T) and R_L(θ, U) of these two estimators,
under the loss function specified above. What can you say about their comparative performance? (Note:
Recall that the CDF of the first order statistic is F_{X_(1)}(x) = 1 − {1 − F_X(x)}^n.)
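Once you have closed-form expressions for the two risk functions, a quick Monte Carlo check like the following can catch algebra mistakes. The values of θ and n and the choice c = n here are illustrative assumptions only (c = n makes U unbiased), not part of the problem statement:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, c, reps = 2.0, 6, 6, 200_000  # illustrative settings; c = n is one natural choice

# Each row is one simulated sample of size n from exponential(theta) (mean theta)
X = rng.exponential(theta, size=(reps, n))
T = X.mean(axis=1)     # sample-mean estimator
U = c * X.min(axis=1)  # scaled first order statistic

# Monte Carlo estimates of the risks R_L(theta, T) and R_L(theta, U)
risk_T = ((T - theta) ** 2).mean()
risk_U = ((U - theta) ** 2).mean()
print(risk_T, risk_U)
```

Comparing these empirical risks at a few values of θ and n against your derived formulas is a useful sanity check before drawing conclusions about which estimator dominates.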
Problem 5: Consider a decision-theoretic setup in which the parameter space is a finite interval T = [a, b],
the action space is all of the real numbers, A = R, and the loss function is L(θ, a) = h(|θ − a|) for some
strictly increasing function h. ...