“Probability and Statistics – The Science of Uncertainty”

 

 

Chapter 1
Book: Michael J. Evans and Jeffrey S. Rosenthal, “Probability and Statistics – The Science of Uncertainty”

Homework Part:
Answer the following questions. In addition, you should solve three more questions, obtained by changing the given questions to a different option.

1.2.13 Consider again the uniform distribution on [0, 1]. Is it true that
P([0,1]) = ∑_(s∈[0,1]) P({s}) ?
How does this relate to the additivity property of probability measures?

1.3.9 Suppose we choose a positive integer at random, according to some unknown probability distribution. Suppose we know that P({1, 2, 3, 4, 5}) = 0.3, that P({4, 5, 6}) = 0.4, and that P({1}) = 0.1. What are the largest and smallest possible values of P({2})?

1.3.10 Generalize the principle of inclusion–exclusion, as follows.
(a) Suppose there are three events A, B, and C. Prove that
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C).
(b) Suppose there are n events A_1, A_2, . . . , A_n. Prove that
P(A_1 ∪ · · · ∪ A_n) = ∑_(i=1)^n P(A_i) − ∑_(i<j) P(A_i ∩ A_j) + ∑_(i<j<k) P(A_i ∩ A_j ∩ A_k) − · · · ± P(A_1 ∩ · · · ∩ A_n).
(Hint: Use induction.)
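
As a quick numerical sanity check on the three-event identity (not a substitute for the proof), one can enumerate a small finite sample space and compare both sides directly. The Python sketch below uses one fair die and three illustrative events; none of these specific events come from the exercise.

    # Sanity check of three-event inclusion-exclusion on a single fair die.
    # The events A, B, C are illustrative choices, not part of Exercise 1.3.10.
    from fractions import Fraction

    S = range(1, 7)                        # outcomes of one fair die

    def P(E):
        return Fraction(len(E), 6)         # uniform probability of an event E ⊆ S

    A = {s for s in S if s % 2 == 0}       # die shows an even number
    B = {s for s in S if s <= 3}           # die shows at most 3
    C = {s for s in S if s in (1, 2)}      # die shows 1 or 2

    lhs = P(A | B | C)
    rhs = (P(A) + P(B) + P(C)
           - P(A & B) - P(A & C) - P(B & C)
           + P(A & B & C))
    assert lhs == rhs                      # both equal 5/6 for these events
    print(lhs, rhs)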

 

1.4.21 (The birthday problem) Suppose there are C people, each of whose birthdays (month and day only) is equally likely to fall on any of the 365 days of a normal (i.e., non-leap) year.
(a) Suppose C = 2. What is the probability that the two people have the same exact birthday?
(b) Suppose C ≥ 2. What is the probability that all C people have the same exact birthday?
(c) Suppose C ≥ 2. What is the probability that some pair of the C people have the same exact birthday? (Hint: You may wish to use (1.3.1).)
(d) What is the smallest value of C such that the probability in part (c) is more than 0.5? Do you find this result surprising?
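
For part (d), a small Monte Carlo simulation is one way to check the analytic answer empirically. The sketch below is a rough check, not a derivation; the helper name prob_shared_birthday and the candidate group sizes are illustrative.

    # Monte Carlo check for the birthday problem, part (d).
    import random

    def prob_shared_birthday(C, trials=100_000):
        """Estimate the probability that some pair among C people shares a birthday."""
        hits = 0
        for _ in range(trials):
            days = [random.randrange(365) for _ in range(C)]
            if len(set(days)) < C:         # a repeated day means a shared birthday
                hits += 1
        return hits / trials

    for C in (10, 20, 30, 40):             # candidate group sizes to experiment with
        print(C, prob_shared_birthday(C))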

 

 

1.5.7 Suppose a baseball pitcher throws fastballs 80% of the time and curveballs 20% of the time. Suppose a batter hits a home run on 8% of all fastball pitches, and on 5% of all curveball pitches. What is the probability that this batter will hit a home run on this pitcher’s next pitch?

1.5.14 Prove that A and B are independent if and only if A^C (the complement of A) and B are independent.

1.5.17 (The game of craps) The game of craps is played by rolling two fair, six-sided dice. On the first roll, if the sum of the two numbers showing equals 2, 3, or 12, then the player immediately loses. If the sum equals 7 or 11, then the player immediately wins. If the sum equals any other value, then this value becomes the player’s “point.” The player then repeatedly rolls the two dice, until such time as he or she either rolls the point value again (in which case he or she wins) or rolls a 7 (in which case he or she loses).
(a) Suppose the player’s point is equal to 4. Conditional on this, what is the conditional probability that he or she will win (i.e., will roll another 4 before rolling a 7)? (Hint: The final roll will be either a 4 or a 7; what is the conditional probability that it is a 4?)
(b) For 2 ≤ i ≤ 12, let p_i be the conditional probability that the player will win, conditional on having rolled i on the first roll. Compute p_i for all i with 2 ≤ i ≤ 12. (Hint: You’ve already done this for i = 4 in part (a). Also, the cases i = 2, 3, 7, 11, 12 are trivial. The other cases are similar to the i = 4 case.)
(c) Compute the overall probability that a player will win at craps. (Hint: Use part (b) and Theorem 1.5.1.)
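
The rules of craps as stated above translate directly into a simulation, which gives a quick empirical check on the answer to part (c). This is only a sketch with illustrative helper names; the analytic computation via parts (a) and (b) is still required.

    # Simulate craps exactly as described in 1.5.17, to check part (c) numerically.
    import random

    def roll_two_dice():
        return random.randint(1, 6) + random.randint(1, 6)

    def play_craps():
        """Return True if the player wins one game."""
        first = roll_two_dice()
        if first in (2, 3, 12):
            return False                   # immediate loss
        if first in (7, 11):
            return True                    # immediate win
        point = first
        while True:                        # keep rolling until the point or a 7
            s = roll_two_dice()
            if s == point:
                return True
            if s == 7:
                return False

    trials = 200_000
    print(sum(play_craps() for _ in range(trials)) / trials)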

1.5.18 (The Monty Hall problem) Suppose there are three doors, labeled A, B, and C. A new car is behind one of the three doors, but you don’t know which. You select one of the doors, say, door A. The host then opens one of doors B or C, as follows: If the car is behind B, then they open C; if the car is behind C, then they open B; if the car is behind A, then they open either B or C with probability 1/2 each. (In any case, the door opened by the host will not have the car behind it.) The host then gives you the option of either sticking with your original door choice (i.e., A), or switching to the remaining unopened door (i.e., whichever of B or C the host did not open). You then win (i.e., get to keep the car) if and only if the car is behind your final door selection. (Source: Parade Magazine, “Ask Marilyn” column, September 9, 1990.) Suppose for definiteness that the host opens door B.
(a) If you stick with your original choice (i.e., door A), conditional on the host having opened door B, then what is your probability of winning? (Hint: First condition on the true location of the car. Then use Theorem 1.5.2.)
(b) If you switch to the remaining door (i.e., door C), conditional on the host having opened door B, then what is your probability of winning?
(c) Do you find the result of parts (a) and (b) surprising? How could you design a physical experiment to verify the result?
(d) Suppose we change the rules so that, if you originally chose A and the car was indeed behind A, then the host always opens door B. How would the answers to parts (a) and (b) change in this case?
(e) Suppose we change the rules so that, if you originally chose A, then the host always opens door B no matter where the car is. We then condition on the fact that door B happened not to have a car behind it. How would the answers to parts (a) and (b) change in this case?
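
Part (c) asks how the result could be verified experimentally; a computer simulation of the standard rules is one natural analogue of such an experiment. The sketch below estimates the winning frequency for the “stick” and “switch” strategies; under the standard rules these unconditional frequencies agree, by symmetry, with the conditional probabilities asked for in parts (a) and (b). The function name play and its details are illustrative.

    # Simulate the Monty Hall game under the standard rules of 1.5.18.
    import random

    def play(switch):
        """Return True if the player wins; the player always picks door 'A' first."""
        doors = ['A', 'B', 'C']
        car = random.choice(doors)
        pick = 'A'
        # The host opens a door that is neither the player's pick nor the car,
        # choosing at random between B and C when the car is behind A.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    trials = 100_000
    print("stick :", sum(play(False) for _ in range(trials)) / trials)
    print("switch:", sum(play(True) for _ in range(trials)) / trials)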

1.6.2 Consider the uniform distribution on [0, 1]. Compute (with proof)
lim_(n→∞) P([1/4, 1 − e^(-n)])

1.6.10 Let P be some probability measure on sample space S = [0, 1].
(a) Prove that we must have
lim_(n→∞) P((0, 1/n)) = 0.
(b) Show by example that we might have
lim_(n→∞) P([0, 1/n)) > 0.

Self-study Part:
1.2.14 Suppose S is a finite or countable set. Is it possible that P({s}) = 0 for every single s ∈ S? Why or why not?
1.2.15 Suppose S is an uncountable set. Is it possible that P({s}) = 0 for every single s ∈ S? Why or why not?
1.2.16 Does the additivity property make sense intuitively? Why or why not?

 

 

1.2.17 Is it important that we always have P(S) = 1? How would probability theory change if this were not the case?

1.3.11 Of the various theorems presented in this section, which ones do you think are the most important? Which ones do you think are the least important? Explain the reasons for your choices.

1.4.15 Suppose we roll eight fair six-sided dice. What is the probability that the sum of the eight dice is equal to 9? What is the probability that the sum of the eight dice is equal to 10? What is the probability that the sum of the eight dice is equal to 11?

1.4.17 Suppose we roll 10 fair six-sided dice. What is the probability that there are exactly two 2’s showing and exactly three 3’s showing?

1.4.20 Suppose we roll two fair six-sided dice and flip 12 coins. What is the probability that the number of heads is equal to the sum of the numbers showing on the two dice?

1.5.3 Suppose we flip three fair coins.
(a) What is the probability that all three coins are heads?
(b) What is the conditional probability that all three coins are heads, conditional on knowing that the number of heads is odd?
(c) What is the conditional probability that all three coins are heads, given that the number of heads is even?

 

1.5.8 Suppose the probability of snow is 20%, and the probability of a traffic accident is 10%. Suppose further that the conditional probability of an accident, given that it snows, is 40%. What is the conditional probability that it snows, given that there is an accident?

 

1.5.9 Suppose we roll two fair six-sided dice, one red and one blue. Let A be the event that the two dice show the same value. Let B be the event that the sum of the two dice is equal to 12. Let C be the event that the red die shows 4. Let D be the event that the blue die shows 4.
(a) Are A and B independent?
(b) Are A and C independent?
(c) Are A and D independent?
(d) Are C and D independent?
(e) Are A, C, and D all independent?

 

1.5.11 Suppose we roll a fair six-sided die and then flip a number of fair coins equal to the number showing on the die. (For example, if the die shows 4, then we flip 4 coins.)
(a) What is the probability that the number of heads equals 3?
(b) Conditional on knowing that the number of heads equals 3, what is the conditional probability that the die showed the number 5?

 

1.5.12 Suppose we roll a fair six-sided die and then pick a number of cards from a well-shuffled deck equal to the number showing on the die. (For example, if the die shows 4, then we pick 4 cards.)
(a) What is the probability that the number of jacks in our hand equals 2?
(b) Conditional on knowing that the number of jacks in our hand equals 2, what is the conditional probability that the die showed the number 3?

1.5.15 Let A and B be events of positive probability. Prove that P(A | B) > P(A) if and only if P(B | A) > P(B).

1.5.19 Suppose two people each flip a fair coin simultaneously. Will the results of the two flips usually be independent? Under what sorts of circumstances might they not be independent? (List as many such circumstances as you can.)

1.5.21 The Monty Hall problem (Challenge 1.5.18) was originally presented by Marilyn vos Savant, writing in the “Ask Marilyn” column of Parade Magazine. She gave the correct answer. However, many people (including some well-known mathematicians, plus many laypeople) wrote in to complain that her answer was incorrect. The controversy dragged on for months, with many letters and very strong language written by both sides (in the end, vos Savant was vindicated). Part of the confusion lay in the assumptions being made, e.g., some people misinterpreted her question as that of the modified version of part (e) of Challenge 1.5.18. However, a lot of the confusion was simply due to mathematical errors and misunderstandings. (Source: Parade Magazine, “Ask Marilyn” column, September 9, 1990; December 2, 1990; February 17, 1991; July 7, 1991.)
(a) Does it surprise you that so many people, including well-known mathematicians, made errors in solving this problem? Why or why not?
(b) Does it surprise you that so many people, including many laypeople, cared so strongly about the answer to this problem? Why or why not?

1.6.4 Suppose P([0,8/(4+n)])=(2+e^(-n))/6 for all n = 1, 2, 3, . . . . What must P({0}) be?

1.6.5 Suppose P([0, 1]) = 1, but P([1/n, 1]) = 0 for all n = 1, 2, 3, . . . . What must P({0}) be?
1.6.6 Suppose P([1/n, 1/2]) ≤ 1/3 for all n = 1, 2, 3, . . . .
(a) Must we have P((0, 1/2]) ≤ 1/3?
(b) Must we have P([0, 1/2]) ≤ 1/3?
1.6.8 Suppose P((0, 1/2]) = 1/3. Prove that there is some n such that P([1/n, 1/2]) > 1/4.
1.6.9 Suppose P([0, 1/2]) = 1/3. Must there be some n such that P([1/n, 1/2]) > 1/4?

 

1.6.11 Suppose we know that P is finitely additive, but we do not know that it is countably additive. In other words, we know that P(A_1 ∪ · · · ∪ A_n) = P(A_1) + · · · + P(A_n) for any finite collection of disjoint events {A_1, . . . , A_n}, but we do not know about P(A_1 ∪ A_2 ∪ · · · ) for infinite collections of disjoint events. Suppose further that we know that P is continuous in the sense of Theorem 1.6.1. Using this, give a proof that P must be countably additive. (In effect, you are proving that continuity of P is equivalent to countable additivity of P, at least once we know that P is finitely additive.)

1.6.3 Suppose that S = {1, 2, 3, . . .} is the set of all positive integers and that P is some probability measure on S. Prove that we must have
lim_(n→∞) P({1, 2, . . . , n}) = 1.

1.6.7 Suppose P([0, ∞)) = 1. Prove that there is some n such that P([0, n]) > 0.9.

Chapter 2 (Discrete Distributions)
Book: Michael J. Evans and Jeffrey S. Rosenthal, “Probability and Statistics – The Science of Uncertainty”

Homework Part:
Answer the following questions.

Self-study Part:
2.1.5 Let A and B be events, and let X = I_A · I_B (the product of the indicator functions of A and B). Is X an indicator function? If yes, then of what event?

2.1.10 Let X be a random variable.
(a) Is it necessarily true that X ≥ 0?
(b) Is it necessarily true that there is some real number c such that X + c ≥ 0?
(c) Suppose the sample space S is finite. Then is it necessarily true that there is some real number c such that X + c ≥ 0?

2.2.5 Suppose that a bowl contains 100 chips: 30 are labelled 1, 20 are labelled 2, and 50 are labelled 3. The chips are thoroughly mixed, a chip is drawn, and the number X on the chip is noted.
(a) Compute P(X = x) for every real number x.
(b) Suppose the first chip is replaced, a second chip is drawn, and the number Y on the chip noted. Compute P(Y = y) for every real number y.
(c) Compute P(W = w) for every real number w when W = X + Y.

2.2.8 Suppose that a bowl contains 10 chips, each uniquely numbered 0 through 9. The chips are thoroughly mixed, one is drawn and the number on it, X1, is noted. This chip is then replaced in the bowl. A second chip is drawn and the number on it, X2, is noted. Compute P(W = w) for every real number w when W = X1 + 10X2.

2.1.11 Suppose the sample space S is finite. Is it possible to define an unbounded random variable on S? Why or why not?

2.1.12 Suppose X is a random variable that takes only the values 0 or 1. Must X be an indicator function? Explain.
2.1.13 Suppose the sample space S is finite, of size m. How many different indicator functions can be defined on S?

2.2.10 Suppose Alice flips three fair coins, and let X be the number of heads showing. Suppose Barbara flips five fair coins, and let Y be the number of heads showing. Let Z = X − Y. Compute P(Z = z) for every real number z.
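
Because X and Y take only finitely many values here, P(Z = z) can be checked by summing the joint probabilities over all outcome pairs. The sketch below does that enumeration exactly, assuming the binomial probability function for the number of heads in fair coin flips; it is meant only as a check on the hand calculation.

    # Tabulate P(Z = z) for Z = X - Y in 2.2.10 by enumerating joint outcomes,
    # using independence of X ~ Binomial(3, 1/2) and Y ~ Binomial(5, 1/2).
    from fractions import Fraction
    from math import comb
    from collections import defaultdict

    def binom_pf(n, k):
        return Fraction(comb(n, k), 2**n)  # P(k heads in n fair coin flips)

    pZ = defaultdict(Fraction)
    for x in range(4):
        for y in range(6):
            pZ[x - y] += binom_pf(3, x) * binom_pf(5, y)

    for z in sorted(pZ):
        print(z, pZ[z])
    assert sum(pZ.values()) == 1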

2.3.14 Suppose that a symmetrical die is rolled 20 independent times, and each time we record whether or not the event {2, 3, 5, 6} has occurred.
(a) What is the distribution of the number of times this event occurs in 20 rolls?
(b) Calculate the probability that the event occurs five times.

2.3.16 An urn contains 4 black balls and 5 white balls. After a thorough mixing, a ball is drawn from the urn, its color is noted, and the ball is returned to the urn.
(a) What is the probability that 5 black balls are observed in 15 such draws?
(b) What is the probability that 15 draws are required until the first black ball is observed?
(c) What is the probability that 15 draws are required until the fifth black ball is observed?

 

2.3.17 An urn contains 4 black balls and 5 white balls. After a thorough mixing, a ball is drawn from the urn, its color is noted, and the ball is set aside. The remaining balls are then mixed and a second ball is drawn.
(a) What is the probability distribution of the number of black balls observed?
(b) What is the probability distribution of the number of white balls observed?

2.3.23 Let X be a discrete random variable with probability function p_X(x) = 2^(-x) for x = 1, 2, 3, . . . , with p_X(x) = 0 otherwise.
(a) Let Y = X^2. What is the probability function p_Y of Y?
(b) Let Z = X − 1. What is the distribution of Z? (Identify the distribution by name and specify all parameter values.)

 

2.3.24 Let X ∼ Binomial(n_1, θ) and Y ∼ Binomial(n_2, θ), with X and Y chosen independently. Let Z = X + Y. What will be the distribution of Z? (Explain your reasoning.) (Hint: See the end of Example 2.3.3.)

2.3.25 Let X ∼ Geometric(θ) and Y ∼ Geometric(θ), with X and Y chosen independently. Let Z = X + Y. What will be the distribution of Z? Generalize this to the sum of r independent Geometric(θ) random variables. (Explain your reasoning.)

Chapter 2 (Continuous Distributions)
Book: Michael J. Evans and Jeffrey S. Rosenthal, “Probability and Statistics – The Science of Uncertainty”

Homework Part:

Self-study Part:
2.4.5 Is the function defined by f(x) = x/3 for −1 < x < 2, and 0 otherwise, a density? Why or why not?
(Answer: it is not a density, because it takes negative values on −1 < x < 0.)

2.4.7 Let M > 0, and suppose f(x)=cx^2 for 0 < x < M, otherwise f(x)=0. For what value of c (depending on M) is f a density?

2.4.8 Suppose X has density f and that f(x) ≥ 2 for 0.3<x<0.4. Prove that P(0.3<X<0.4)≥0.2.

2.4.9 Suppose X has density f and Y has density g. Suppose f(x)>g(x) for 1<x<2. Prove that P(1<X<2)>P(1<Y<2).

2.4.16 Use the fact that Γ(1/2) = √π to give an alternate proof that ∫_(-∞)^∞ ϕ(x) dx = 1 (as in Theorem 2.4.2). (Hint: Make the substitution t = x^2/2.)

2.4.17 Let f be the density of the Gamma(α, λ) distribution, as in (2.4.8). Prove that ∫_0^∞ f(x) dx = 1. (Hint: Let t = λx.)

2.4.18 (Logistic distribution) Consider the function given by f(x)=e^(-x) (1+e^(-x) )^(-2) for -∞<x<∞. Prove that f is a density function.

2.4.19 (Weibull(α) distribution) Consider, for α>0 fixed, the function given by f(x)=αx^(α-1) e^(-x^α ) for 0<x<∞ and 0 otherwise. Prove that f is a density function.

2.4.20 (Pareto(α) distribution) Consider, for α>0 fixed, the function given by f(x)=α(1+x)^(-α-1) for 0<x<∞ and 0 otherwise. Prove that f is a density function.

2.4.21 (Cauchy distribution) Consider the function given by f(x) = (1/π) · 1/(1 + x^2) for -∞<x<∞. Prove that f is a density function. (Hint: Recall the derivative of arctan(x).)

2.4.22 (Laplace distribution) Consider the function given by f(x)= e^(-|x| )/2 for -∞<x<∞ and 0 otherwise. Prove that f is a density function.

2.4.23 (Extreme value distribution) Consider the function given by f(x)=e^(-x) exp⁡{ -e^(-x)} for -∞<x<∞ and 0 otherwise. Prove that f is a density function.

2.4.24 (Beta(a, b) distribution) The beta function is the function B : (0,∞)^2 → R^1 given by B(a,b) = ∫_0^1 x^(a-1) (1-x)^(b-1) dx. It can be proved (see Challenge 2.4.25) that
B(a,b) = Γ(a)Γ(b)/Γ(a+b).   (2.4.10)
(a) Prove that the function f given by f(x) = B(a,b)^(-1) x^(a-1) (1-x)^(b-1), for 0<x<1 and 0 otherwise, is a density function.
(b) Determine and plot the density when a = 1, b = 1. Can you name this distribution?
(c) Determine and plot the density when a = 2, b = 1.
(d) Determine and plot the density when a = 1, b = 2.
(e) Determine and plot the density when a = 2, b = 2.

2.4.25 Prove (2.4.10). (Hint: Use Γ(a)Γ(b) = ∫_0^∞ ∫_0^∞ x^(a-1) y^(b-1) e^(-x-y) dx dy and make the change of variable u = x+y, v = x/u.)

2.4.26 Suppose X∼N(0,1) and Y∼N(0,4). Which do you think is larger, P(X>2) or P(Y>2)? Why? (Hint: Look at Figure 2.4.5.)

2.5.2 Consider rolling one fair six-sided die, so that S = {1, 2, 3, 4, 5, 6}, and P(s) = 1/6 for all s ∈ S. Let X be the number showing on the die, so that X(s) = s for s ∈ S. Let Y = X^2. Compute the cumulative distribution function F_Y(y) = P(Y ≤ y) for all y ∈ R^1. Verify explicitly that properties (a) through (d) of Theorem 2.5.2 are satisfied by this function F_Y.

2.5.8 Suppose F_Y (y)=y^3 for 0≤y<1/2, and F_Y (y)=1-y^3 for 1/2≤y≤1. Compute each of the following.
(a) P(1/3 < Y < 3/4)
(b) P(Y = 1/3)
(c) P(Y = 1/2)

2.5.17 Let F be a cumulative distribution function. For x ∈ R^1, we could define F(x^+) by F(x^+) = lim_(n→∞) F(x + 1/n). Prove that F is right continuous, meaning that for each x ∈ R^1, we have F(x^+) = F(x). (Hint: You will need to use continuity of P (Theorem 1.6.1).)

2.6.2 Let X∼Uniform[L,R]. Let Y=cX+d, where c<0. Prove that Y∼Uniform[cR+d,cL+d]. (In particular, if L = 0 and R = 1 and c = −1 and d = 1, then X∼Uniform[0,1] and also Y=1-X∼Uniform[0,1].)

2.6.4 Let X ∼ Exponential(λ). Let Y = cX, where c > 0. Prove that Y ∼ Exponential(λ/c).
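
Since 2.6.4 states the claimed distribution of Y in the problem itself, a quick numerical comparison of the empirical cdf of Y = cX with the Exponential(λ/c) cdf can serve as a sanity check before writing the proof. The parameter values below are arbitrary illustrative choices, and the check is not a proof.

    # Numerical sanity check (not a proof) of the claim in 2.6.4:
    # if X ~ Exponential(lam) and Y = cX with c > 0, then Y ~ Exponential(lam/c).
    import random
    import math

    lam, c = 2.0, 3.0                      # illustrative parameter choices
    n = 200_000
    ys = [c * random.expovariate(lam) for _ in range(n)]

    for t in (0.5, 1.0, 2.0, 4.0):
        empirical = sum(y <= t for y in ys) / n
        claimed = 1.0 - math.exp(-(lam / c) * t)
        print(t, round(empirical, 4), round(claimed, 4))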

2.6.9 Let X have density function f_X (x) = x^3/4 for 0 < x < 2, otherwise f_X (x) = 0.
(a) Let Y=X^2. Compute the density function f_Y (y) for Y.
(b) Let Z=√X. Compute the density function f_Z (z) for Z.

2.6.10 Let X∼Uniform[0,π/2]. Let Y=sin(X). Compute the density function f_Y (y) for Y.

2.6.11 Let X have density function f_X (x)=(1/2)sin(x) for 0<x<π, otherwise f_X (x)=0. Let Y=X^2. Compute the density function f_Y (y) for Y.

2.6.17 (Log-normal(τ) distribution) Suppose that X ∼ N(0, τ^2). Prove that Y = e^X has density f_τ(y) = (1/(τ√(2π))) exp(−(ln y)^2/(2τ^2)) · (1/y) for y > 0, where τ > 0 is unknown. We say that Y ∼ Log-normal(τ).

2.6.21 Theorems 2.6.2 and 2.6.3 require that h be an increasing or decreasing function, at least at places where the density of X is positive (see Theorem 2.6.4). Suppose now that X∼N(0,1) and Y=h(X), where h(x) = x^2. Then f_X (x)>0 for all x, while h is increasing only for x>0 and decreasing only for x<0. Hence, Theorems 2.6.2 and 2.6.3 do not directly apply. Compute f_Y (y) anyway. (Hint: P(a≤Y≤b)=P(a≤Y≤b,X>0)+P(a≤Y≤b,X<0).)

2.7.4 For each of the following joint density functions f_(X,Y), find the value of C and compute f_X(x), f_Y(y), and P(X ≤ 0.8, Y ≤ 0.6).
(a) f_(X,Y)(x,y) = 2x^2 y + Cy^5 for 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1, and 0 otherwise.
(b) f_(X,Y)(x,y) = C(xy + x^5 y^5) for 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1, and 0 otherwise.
(c) f_(X,Y)(x,y) = C(xy + x^5 y^5) for 0 ≤ x ≤ 4 and 0 ≤ y ≤ 10, and 0 otherwise.
(d) f_(X,Y)(x,y) = Cx^5 y^5 for 0 ≤ x ≤ 4 and 0 ≤ y ≤ 10, and 0 otherwise.

2.7.8 Let X and Y have joint density f_(X,Y) (x,y)=(x^2+y)/36 for -2<x<1 and 0<y<4, otherwise f_(X,Y) (x,y)=0. Compute each of the following.
(a) The marginal density f_X (x) for all x∈R^1
(b) The marginal density f_Y (y) for all y∈R^1
(c) P(Y<1)
(d) The joint cdf F_(X,Y) (x,y) for all x,y∈R^1

 

 

2.7.12 Let F_(X,Y) be a joint cdf. Prove that for all y ∈ R^1, lim_(x→-∞) F_(X,Y)(x,y) = 0.

2.7.13 Let X and Y have the Bivariate Normal(μ_1, μ_2, σ_1, σ_2, ρ) distribution, as in Example 2.7.9. Prove that X ∼ N(μ_1, σ_1^2), by proving that ∫_(-∞)^∞ f_(X,Y)(x,y) dy = (1/(σ_1 √(2π))) exp{−(x − μ_1)^2/(2σ_1^2)}.

2.7.14 Suppose that the joint density f_(X,Y) is given by f_(X,Y) (x,y)=Cye^(-xy) for 0<x<1,0<y<1 and is 0 otherwise.
(a) Determine C so that f_(X,Y) is a density.
(b) Compute P (1/2<X<1,1/2<Y<1) .
(c) Compute the marginal densities of X and Y.

2.7.17 (Dirichlet(α_1, α_2, α_3) distribution) Let (X_1, X_2) have the joint density
f_(X_1,X_2)(x_1, x_2) = [Γ(α_1 + α_2 + α_3)/(Γ(α_1)Γ(α_2)Γ(α_3))] x_1^(α_1−1) x_2^(α_2−1) (1 − x_1 − x_2)^(α_3−1)
for x_1 ≥ 0, x_2 ≥ 0, and 0 ≤ x_1 + x_2 ≤ 1. A Dirichlet distribution is often applicable when X_1, X_2, and 1 − X_1 − X_2 correspond to random proportions.
(a) Prove that f_(X_1,X_2) is a density. (Hint: Sketch the region where f_(X_1,X_2) is nonnegative, integrate out x_1 first by making the transformation u = x_1/(1 − x_2) in this integral, and use (2.4.10) from Problem 2.4.24.)

2.7.18 (Dirichlet(α_1, . . . , α_(k+1)) distribution) Let (X_1, . . . , X_k) have the joint density
f_(X_1,...,X_k)(x_1, . . . , x_k) = [Γ(α_1 + · · · + α_(k+1))/(Γ(α_1) · · · Γ(α_(k+1)))] x_1^(α_1−1) · · · x_k^(α_k−1) (1 − x_1 − · · · − x_k)^(α_(k+1)−1)
for x_i ≥ 0, i = 1, . . . , k, and 0 ≤ x_1 + · · · + x_k ≤ 1. Prove that f_(X_1,...,X_k) is a density. (Hint: Problem 2.7.17.)

2.7.19 Find an example of two random variables X and Y and a function h : R^1 → R^1, such that F_X(x) > 0 and F_Y(x) > 0 for all x ∈ R^1, but lim_(x→∞) F_(X,Y)(x, h(x)) = 0.

2.7.20 What are examples of pairs of real-life random quantities that have interesting relationships? (List as many as you can, and describe each relationship as well as you can.)

2.8.4 Suppose X and Y have joint density function
f_(X,Y)(x,y) = (3 + e^x + 3y + 3ye^y + ye^x + ye^(x+y)) for 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1, and 0 otherwise.
(a) Compute f_X (x) for all x∈R^1.
(b) Compute f_Y (y) for all y∈R^1.
(c) Determine whether or not X and Y are independent.

2.8.6 Let X∼Bernoulli(θ) and Y∼Geometric(θ), with X and Y independent. Let Z=X+Y. What is the probability function of Z?

2.8.8 Let X and Y be jointly absolutely continuous random variables. Suppose X∼Exponential(2) and that P(Y>5│X=x)=e^(-3x). Compute P(Y>5).

2.8.22 Suppose that (X_1,X_2,X_3)∼Multinomial(n,θ_1,θ_2,θ_3). Prove, by summing the joint probability function, that X_1∼Binomial(n,θ_1) .

2.8.27 Suppose that (X, Y) ∼ Bivariate Normal(μ_1, μ_2, σ_1, σ_2, ρ). Prove that the conditional distribution of Y given X = x is N(μ_2 + ρσ_2(x − μ_1)/σ_1, (1 − ρ^2)σ_2^2). Establish the analogous result for the conditional distribution of X given Y = y. (Hint: Use (2.7.1) for Y given X = x and its analog for X given Y = y.)

2.8.28 Let X and Y be random variables.
(a) Suppose X and Y are both discrete. Prove that X and Y are independent if and only if P(Y=y | X=x)= P(Y=y) for all x and y such that P(X=x)>0.
(b) Suppose X and Y are jointly absolutely continuous. Prove that X and Y are independent if and only if P(a ≤ Y ≤ b | X = x) = P(a ≤ Y ≤ b) for all a ≤ b and all x such that f_X(x) > 0.

2.9.2 Let X∼Exponential(3) and Y∼Uniform[1,4], with X and Y independent. Let Z=X+Y and W=X-Y.
(a) Write down the joint density f_(X,Y) (x,y) of X and Y. (Be sure to consider the ranges of valid x and y values.)
(b) Find a two-dimensional function h such that (Z,W) = h(X,Y).
(c) Find a two-dimensional function h^(-1) such that (X,Y) = h^(-1)(Z,W).
(d) Compute the joint density f_(Z,W) (z,w) of Z and W. (Again, be sure to consider the ranges of valid z and w values.)

2.9.13 Let X and Y be independent, with X∼Negative-Binomial(r_1,θ) and Y∼Negative-Binomial(r_2,θ). Let Z=X+Y. Use Theorem 2.9.3(a) to prove that Z∼Negative-Binomial(r_1+r_2,θ).

2.9.15 Let X and Y be independent, with X∼Gamma(α_1,λ) and Y∼Gamma(α_2,λ). Let Z=X+Y. Use Theorem 2.9.3(b) to prove that Z∼Gamma(α_1+α_2,λ).

2.9.16 (MV) Show that when Z_1,Z_2 are i.i.d. N(0,1) and X,Y are given by (2.7.1), then (X,Y)∼Bivariate Normal(μ_1,μ_2,σ_1,σ_2,ρ) .

2.10.5 Let U_1∼Uniform[0,1] and U_2∼Uniform[0,1] be independent, and let X = c_1 √(log(1/U_1)) cos(2πU_2) + c_2. Find values of c_1 and c_2 such that X∼N(5,9).
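
For orientation, the classical Box–Muller transform turns two independent Uniform[0,1] variables into a standard normal via √(2 log(1/U_1)) cos(2πU_2). The sketch below only demonstrates that transform numerically; it deliberately does not plug in values for c_1 or c_2, which are what 2.10.5 asks for.

    # The classical Box-Muller transform, producing (approximately) N(0,1) samples.
    import random
    import math

    def box_muller():
        u1 = 1.0 - random.random()         # in (0, 1], avoids log(1/0)
        u2 = random.random()
        return math.sqrt(2.0 * math.log(1.0 / u1)) * math.cos(2.0 * math.pi * u2)

    samples = [box_muller() for _ in range(100_000)]
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    print(round(mean, 3), round(var, 3))   # should be close to 0 and 1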

2.10.14 Find the inverse cdf of the Weibull(α) distribution of Problem 2.4.19. (Hint: See Problem 2.5.21.)

2.10.15 Find the inverse cdf of the Pareto(α) distribution of Problem 2.4.20. (Hint: See Problem 2.5.22.)

2.10.16 Find the inverse cdf of the Cauchy distribution of Problem 2.4.21. (Hint: See Problem 2.5.23.)

2.10.17 Find the inverse cdf of the Laplace distribution of Problem 2.4.22. (Hint: See Problem 2.5.24.)

2.10.18 Find the inverse cdf of the extreme value distribution of Problem 2.4.23. (Hint: See Problem 2.5.25.)

2.10.19 Find the inverse cdfs of the beta distributions in Problem 2.4.24(b) through (d). (Hint: See Problem 2.5.26.)
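
Problems 2.10.14 through 2.10.19 all rest on inversion sampling: if U ∼ Uniform[0,1] and F is a cdf with inverse F^(-1), then F^(-1)(U) has cdf F. As a template, the sketch below uses the Exponential(λ) distribution, whose inverse cdf is F^(-1)(u) = −log(1 − u)/λ; this distribution is not one of the assigned cases, so nothing is given away.

    # Inversion sampling template, illustrated with the Exponential(lam) distribution.
    import random
    import math

    def sample_exponential(lam):
        u = random.random()                # U ~ Uniform[0, 1)
        return -math.log(1.0 - u) / lam    # F^{-1}(U) for F(x) = 1 - e^{-lam x}

    lam = 2.0
    xs = [sample_exponential(lam) for _ in range(100_000)]
    print(sum(xs) / len(xs))               # sample mean should be near 1/lam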

2.10.20 (Method of composition) If we generate X ∼ f_X, obtaining x, and then generate Y from f_(Y|X)(· | x), prove that Y ∼ f_Y.
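
The two-stage sampling pattern in 2.10.20 looks as follows in code. The particular distributions (X ∼ Exponential(1) and Y | X = x ∼ N(x, 1)) are arbitrary illustrative choices; only the draw-X-first, then-draw-Y structure matters.

    # Method of composition: draw X from f_X, then Y from f_{Y|X}(. | x);
    # the resulting Y values are samples from the marginal density f_Y.
    import random

    def sample_y():
        x = random.expovariate(1.0)        # X ~ Exponential(1)  (illustrative)
        return random.gauss(x, 1.0)        # Y | X = x ~ N(x, 1) (illustrative)

    ys = [sample_y() for _ in range(100_000)]
    print(sum(ys) / len(ys))               # for these choices, E(Y) = E(X) = 1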

2.10.21 (Rejection sampling) Suppose f is a complicated density function. Suppose g is a density function from which it is easy to sample (e.g., the density of a uniform or exponential or normal distribution). Suppose we know a value of c such that f(x)≤cg(x) for all x∈R^1. The following provides a method, called rejection sampling, for sampling from a complicated density f by using a simpler density g, provided only that we know f(x)≤cg(x) for all x∈R^1.
(a) Suppose Y has density g. Let U ∼ Uniform[0, c], with U and Y independent. Prove that P(a ≤ Y ≤ b | f(Y) ≥ cUg(Y)) = ∫_a^b f(x) dx. (Hint: Use Theorem 2.8.1 to show that P(a ≤ Y ≤ b, f(Y) ≥ cUg(Y)) = ∫_a^b g(y) P(f(Y) ≥ cUg(Y) | Y = y) dy.)
(b) Suppose that Y_1, Y_2, . . . are i.i.d., each with density g, and independently U_1, U_2, . . . are i.i.d. Uniform[0, c]. Let i_0 = 0 and, for n ≥ 1, let i_n = min{j > i_(n-1) : f(Y_j) ≥ cU_j g(Y_j)}. Prove that Y_(i_1), Y_(i_2), . . . are i.i.d., each with density f. (Hint: Prove this for Y_(i_1) and Y_(i_2).)
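
Below is a minimal sketch of the rejection-sampling recipe in 2.10.21, with illustrative choices: target density f(x) = 6x(1 − x) on (0, 1) (a Beta(2, 2) density), proposal density g ≡ 1 on [0, 1], and c = 1.5, so that f(x) ≤ cg(x) for all x. It follows the acceptance rule f(Y) ≥ cUg(Y) with U ∼ Uniform[0, c], as in part (a).

    # Rejection sampling as set up in 2.10.21, with illustrative f, g, and c.
    import random

    def f(x):                              # target density: Beta(2, 2)
        return 6.0 * x * (1.0 - x) if 0.0 < x < 1.0 else 0.0

    c = 1.5                                # envelope constant: f(x) <= c * g(x)

    def rejection_sample():
        while True:
            y = random.random()            # proposal Y ~ Uniform[0,1], so g(y) = 1
            u = random.uniform(0.0, c)     # U ~ Uniform[0, c], independent of Y
            if f(y) >= c * u * 1.0:        # accept when f(Y) >= c U g(Y)
                return y

    xs = [rejection_sample() for _ in range(50_000)]
    print(sum(xs) / len(xs))               # should be near 1/2, the mean of Beta(2,2)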
