    The University of Sydney

    School of Mathematics and Statistics

    Solutions to Tutorial 12 (Week 13)

    MATH3969: Measure Theory and Fourier Analysis (Advanced) Semester 2, 2009

    Web Page: http://www.maths.usyd.edu.au/u/UG/SM/MATH3969/

    Lecturer: Daniel Daners

    Questions to complete during the tutorial

1. Let $X \colon \Omega \to \mathbb{R}$ be a random variable on the probability space $(\Omega, \mathcal{A}, P)$.

(a) Prove that

\[
P[|X| \ge \varepsilon] \le \frac{1}{\varepsilon^p} \int_{[|X| \ge \varepsilon]} |X|^p \, dP \le \frac{1}{\varepsilon^p} E[|X|^p]
\]

for all $\varepsilon > 0$ and $1 \le p < \infty$.

Solution: Note that for $\omega \in [|X| \ge \varepsilon] = \{\omega \in \Omega : |X(\omega)| \ge \varepsilon\}$ we have $1 \le |X(\omega)|^p/\varepsilon^p$. Hence

\[
P[|X| \ge \varepsilon] = \int_{[|X| \ge \varepsilon]} 1 \, dP
\le \int_{[|X| \ge \varepsilon]} \frac{|X|^p}{\varepsilon^p} \, dP
= \frac{1}{\varepsilon^p} \int_{[|X| \ge \varepsilon]} |X|^p \, dP
\le \frac{1}{\varepsilon^p} \int_\Omega |X|^p \, dP
= \frac{1}{\varepsilon^p} E[|X|^p].
\]
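For $p = 1$ this is the classical Markov inequality. As a concrete illustration (an added check, not part of the original sheet): if $E[|X|] = 1$, then taking $p = 1$ and $\varepsilon = 10$ gives

\[
P[|X| \ge 10] \le \tfrac{1}{10} E[|X|] = \tfrac{1}{10}.
\]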

(b) Prove Chebychev's inequality

\[
P[|X - \mu| \ge \varepsilon] \le \frac{1}{\varepsilon^2} \operatorname{Var}(X)
\]

for all $\varepsilon > 0$, where $\mu := E[X]$.

Solution: By the previous part applied to $X - \mu$ and $p = 2$ we get

\[
P[|X - \mu| \ge \varepsilon] \le \frac{1}{\varepsilon^2} E[|X - \mu|^2] = \frac{1}{\varepsilon^2} \operatorname{Var}(X)
\]

    as claimed.
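A standard reformulation (added here for illustration): if $\sigma^2 := \operatorname{Var}(X) > 0$ and we take $\varepsilon = k\sigma$ with $k > 0$, then

\[
P[|X - \mu| \ge k\sigma] \le \frac{1}{k^2},
\]

so, for instance, at most $1/9$ of the probability mass lies three or more standard deviations away from the mean.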

2. Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mathcal{A}_0$ a $\sigma$-algebra with $\mathcal{A}_0 \subseteq \mathcal{A}$. Let $X \in L^1(\Omega, \mathcal{A}, P)$ be a random variable.

(a) Let $\varphi \colon \mathbb{R} \to \mathbb{R}$ be a convex function. If $\varphi \circ X \in L^1(\Omega, \mathcal{A}, P)$, prove that $\varphi(E[X \mid \mathcal{A}_0]) \le E[\varphi \circ X \mid \mathcal{A}_0]$. (This generalises Jensen's inequality.)

Solution: If $\varphi \colon \mathbb{R} \to \mathbb{R}$ is convex, then according to lectures we can write

\[
\varphi(s) - \varphi(t) \ge m(t)(s - t)
\]

for all $s, t \in \mathbb{R}$ if we set

\[
m(t) = \sup_{s < t} \frac{\varphi(t) - \varphi(s)}{t - s}.
\]

First assume that $X \in L^\infty(\Omega, \mathcal{A}, P)$ and set $X_0 := E[X \mid \mathcal{A}_0]$. Consider the set $A := \{\omega \in \Omega : X_0(\omega) > \|X\|_\infty\}$. By definition of conditional expectation $A \in \mathcal{A}_0$ and

\[
\|X\|_\infty P(A) \le \int_A X_0 \, dP = \int_A X \, dP \le \|X\|_\infty P(A),
\]

where the first inequality is strict whenever $P(A) > 0$, so that $P(A) = 0$.


Hence $X_0 \le \|X\|_\infty$ almost everywhere. Similarly we show that $X_0 \ge -\|X\|_\infty$ almost everywhere, so that $\|X_0\|_\infty \le \|X\|_\infty$. From the above we have

\[
\varphi \circ X - \varphi \circ X_0 \ge (m \circ X_0)(X - X_0).
\]

From the construction of conditional expectation we know that $E[Y \mid \mathcal{A}_0] \ge 0$ whenever $Y \ge 0$. We also know that taking conditional expectation is a linear map. Hence

\[
E[\varphi \circ X \mid \mathcal{A}_0] - E[\varphi \circ X_0 \mid \mathcal{A}_0] \ge E[(m \circ X_0)(X - X_0) \mid \mathcal{A}_0].
\]

Since $X_0$ is $\mathcal{A}_0$-measurable we have $E[\varphi \circ X_0 \mid \mathcal{A}_0] = \varphi \circ X_0$. Moreover, since $X_0 \in L^\infty(\Omega, \mathcal{A}_0, P)$ and $m(\cdot)$ is an increasing function, $m \circ X_0 \in L^\infty(\Omega, \mathcal{A}_0, P)$. By the properties of conditional expectation from lectures

\[
E[(m \circ X_0)(X - X_0) \mid \mathcal{A}_0] = (m \circ X_0) E[X - X_0 \mid \mathcal{A}_0]
= (m \circ X_0)\bigl(E[X \mid \mathcal{A}_0] - E[X_0 \mid \mathcal{A}_0]\bigr)
= (m \circ X_0)(X_0 - X_0) = 0.
\]

    Putting everything together we get

\[
E[\varphi \circ X \mid \mathcal{A}_0] \ge \varphi \circ X_0 = \varphi(E[X \mid \mathcal{A}_0])
\]

for all $X \in L^\infty(\Omega, \mathcal{A}, P)$ as claimed. Assume now that $X, \varphi \circ X \in L^1(\Omega, \mathcal{A}, P)$. Let

\[
A_n := \{\omega \in \Omega : |X_0(\omega)| < n\}.
\]

    From what we proved above we have

\[
E[\varphi(\mathbf{1}_{A_n} X) \mid \mathcal{A}_0] \ge \varphi(E[\mathbf{1}_{A_n} X \mid \mathcal{A}_0]). \tag{1}
\]

Since $A_n \in \mathcal{A}_0$ we have $E[\mathbf{1}_{A_n} X \mid \mathcal{A}_0] = \mathbf{1}_{A_n} E[X \mid \mathcal{A}_0]$ and by the continuity of $\varphi$ we have

\[
\varphi(E[\mathbf{1}_{A_n} X \mid \mathcal{A}_0]) = \varphi(\mathbf{1}_{A_n} E[X \mid \mathcal{A}_0]) \to \varphi(E[X \mid \mathcal{A}_0])
\]

pointwise. Now clearly $\varphi(\mathbf{1}_{A_n} X(\omega)) = \varphi(X(\omega))$ if $\omega \in A_n$ and $\varphi(\mathbf{1}_{A_n} X(\omega)) = \varphi(0)$ if $\omega \in A_n^c$, and so

\[
\varphi(\mathbf{1}_{A_n} X) = \mathbf{1}_{A_n}\, \varphi \circ X + \mathbf{1}_{A_n^c}\, \varphi(0).
\]

Since $\mathbf{1}_{A_n}, \mathbf{1}_{A_n^c} \in L^\infty(\Omega, \mathcal{A}_0, P)$ we have

\[
E[\varphi(\mathbf{1}_{A_n} X) \mid \mathcal{A}_0] = E[\mathbf{1}_{A_n}\, \varphi \circ X \mid \mathcal{A}_0] + E[\mathbf{1}_{A_n^c}\, \varphi(0) \mid \mathcal{A}_0] = \mathbf{1}_{A_n} E[\varphi \circ X \mid \mathcal{A}_0] + \mathbf{1}_{A_n^c}\, \varphi(0)
\]

for all $n \in \mathbb{N}$. Note that $A_n \subseteq A_{n+1}$ for all $n \in \mathbb{N}$ and set $A := \bigcup_{n \in \mathbb{N}} A_n$. Hence $\mathbf{1}_{A_n} \to \mathbf{1}_A$ and $\mathbf{1}_{A_n^c} \to \mathbf{1}_{A^c}$ pointwise as $n \to \infty$. Since $X_0 \in L^1(\Omega, \mathcal{A}_0, P)$ we have $P(A) = 1$ and therefore

\[
E[\varphi(\mathbf{1}_{A_n} X) \mid \mathcal{A}_0] = \mathbf{1}_{A_n} E[\varphi \circ X \mid \mathcal{A}_0] + \mathbf{1}_{A_n^c}\, \varphi(0) \to E[\varphi \circ X \mid \mathcal{A}_0]
\]

almost everywhere (with probability one). Hence we get the required inequality almost everywhere by passing to the limit in (1).
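To see the subgradient inequality at work, here is a small illustration added to the sheet: for $\varphi(t) = t^2$,

\[
m(t) = \sup_{s < t} \frac{t^2 - s^2}{t - s} = \sup_{s < t} (t + s) = 2t,
\]

and $\varphi(s) - \varphi(t) \ge m(t)(s - t)$ reduces to $s^2 - t^2 \ge 2t(s - t)$, that is, $(s - t)^2 \ge 0$.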

(b) Use the above to show that the linear map $X \mapsto E[X \mid \mathcal{A}_0]$ is continuous from $L^p(\Omega, \mathcal{A}, P)$ to $L^p(\Omega, \mathcal{A}_0, P)$ if $1 \le p < \infty$.

Solution: Since $t \mapsto |t|^p$ is convex for $1 \le p < \infty$ we conclude from (a) that

\[
|E[X \mid \mathcal{A}_0]|^p \le E[|X|^p \mid \mathcal{A}_0]
\]

almost everywhere. Hence by definition of conditional expectation

\[
\|E[X \mid \mathcal{A}_0]\|_p = \Bigl( \int_\Omega |E[X \mid \mathcal{A}_0]|^p \, dP \Bigr)^{1/p}
\le \Bigl( \int_\Omega E[|X|^p \mid \mathcal{A}_0] \, dP \Bigr)^{1/p}
= \Bigl( \int_\Omega |X|^p \, dP \Bigr)^{1/p}
= \|X\|_p.
\]

By the linearity of the map $X \mapsto E[X \mid \mathcal{A}_0]$ continuity follows.
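Spelling out this last step (an added remark): since the map is linear, the bound $\|E[X \mid \mathcal{A}_0]\|_p \le \|X\|_p$ gives

\[
\|E[X \mid \mathcal{A}_0] - E[Y \mid \mathcal{A}_0]\|_p = \|E[X - Y \mid \mathcal{A}_0]\|_p \le \|X - Y\|_p
\]

for all $X, Y \in L^p(\Omega, \mathcal{A}, P)$, so the map is in fact Lipschitz continuous with constant $1$.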



3. Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mathcal{A}_0$ a $\sigma$-algebra with $\mathcal{A}_0 \subseteq \mathcal{A}$. Then clearly $L^2(\Omega, \mathcal{A}_0, P)$ is a closed subspace of $L^2(\Omega, \mathcal{A}, P)$ and therefore, by the projection theorem in a Hilbert space, for every random variable $X \in L^2(\Omega, \mathcal{A}, P)$ there exists $X_0 \in L^2(\Omega, \mathcal{A}_0, P)$ such that $X - X_0$ is orthogonal to $L^2(\Omega, \mathcal{A}_0, P)$. Prove that $X_0 = E[X \mid \mathcal{A}_0]$ almost everywhere.

Solution: By the properties of conditional expectation proved in lectures

\[
\int_\Omega (X - E[X \mid \mathcal{A}_0]) Y \, dP = \int_\Omega XY \, dP - \int_\Omega E[X \mid \mathcal{A}_0]\, Y \, dP = \int_\Omega XY \, dP - \int_\Omega XY \, dP = 0
\]

for all $Y \in L^\infty(\Omega, \mathcal{A}_0, P)$. By density of $L^\infty(\Omega, \mathcal{A}_0, P)$ in $L^2(\Omega, \mathcal{A}_0, P)$ it follows that $X - E[X \mid \mathcal{A}_0]$ is orthogonal to $L^2(\Omega, \mathcal{A}_0, P)$. Since the orthogonal decomposition of $X$ with respect to the closed subspace $L^2(\Omega, \mathcal{A}_0, P)$ is unique, $X_0 = E[X \mid \mathcal{A}_0]$ almost everywhere, as claimed.
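A simple sanity check, added here and not part of the original solution: if $\mathcal{A}_0 = \{\emptyset, \Omega\}$, then $L^2(\Omega, \mathcal{A}_0, P)$ consists of the constant functions and $E[X \mid \mathcal{A}_0] = E[X]$. The orthogonal projection of $X$ onto the constants is the constant $c$ minimising $E[(X - c)^2]$, and

\[
E[(X - c)^2] = \operatorname{Var}(X) + (E[X] - c)^2
\]

is minimal exactly at $c = E[X]$, in agreement with the result above.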

Extra questions for further practice

4. Let $\Omega = [0, 1]$ and $P = m$ the Lebesgue measure. Then $[0, 1]$ is a probability space. Give examples of two distinct random variables which have the same distribution.

Solution: Clearly $X(\omega) := \omega$ and $Y(\omega) := 1 - \omega$ have the same distribution.
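To verify this (a check added to the sheet): for every $t \in [0, 1]$,

\[
P[X \le t] = m([0, t]) = t \quad \text{and} \quad P[Y \le t] = m([1 - t, 1]) = t,
\]

so both have the uniform distribution on $[0, 1]$, while $X(\omega) \ne Y(\omega)$ for every $\omega \ne \tfrac{1}{2}$.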

5. Let $\Omega$ be the space obtained by coin tossing countably many times. Such coin tosses can be represented as infinite sequences of the form $\Omega := \{(a_1, a_2, a_3, \dots) : a_k = 0 \text{ or } 1\}$. A zero means head and a one means tail, for instance. Such sequences can be interpreted as binary expansions of the number

\[
\omega = \sum_{k=1}^\infty \frac{a_k}{2^k},
\]

which is between zero and one. With that identification we can set $\Omega = [0, 1)$.

(a) Denote by $A_k := \{(a_1, a_2, \dots) : a_k = 1\}$. Using that $\Omega = [0, 1)$ sketch $A_1$, $A_2$, $A_3$ and then describe the sets $A_k$ for general $k$. What is the probability of $A_k$, and how does it compare to the Lebesgue measure of $A_k$?

Solution: If $a_1 = 1$, then

\[
\omega = \frac{1}{2} + \sum_{j=2}^\infty \frac{a_j}{2^j} \ge \frac{1}{2},
\]

so $A_1 = [\tfrac{1}{2}, 1)$. If $k = 2$, then

\[
\omega = \frac{a_1}{2} + \frac{1}{4} + \sum_{j=3}^\infty \frac{a_j}{2^j},
\]

so we have $A_2 = [\tfrac{1}{4}, \tfrac{1}{2}) \cup [\tfrac{3}{4}, 1)$ and similarly $A_3 = [\tfrac{1}{8}, \tfrac{1}{4}) \cup [\tfrac{3}{8}, \tfrac{1}{2}) \cup [\tfrac{5}{8}, \tfrac{3}{4}) \cup [\tfrac{7}{8}, 1)$. The probability of $A_k$ is $1/2$, since the $k$th toss shows a one with probability $1/2$, and the Lebesgue measure of each of these sets is also clearly $1/2$. A sketch of $A_1$, $A_2$ and $A_3$ is as follows:

[Sketch omitted: $A_1$, $A_2$ and $A_3$ drawn as subsets of three copies of the interval $[0, 1)$.]
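For general $k$ the same computation gives (a description made explicit here, following the pattern of $A_1$, $A_2$, $A_3$ above)

\[
A_k = \bigcup_{i=0}^{2^{k-1} - 1} \Bigl[ \frac{2i + 1}{2^k}, \frac{2i + 2}{2^k} \Bigr),
\]

a union of $2^{k-1}$ dyadic intervals of length $2^{-k}$, so its Lebesgue measure is again $2^{k-1} \cdot 2^{-k} = 1/2$.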

(b) Show that every interval of the form $I_{n,j} = [j/2^n, (j+1)/2^n)$ ($j = 0, \dots, 2^n - 1$) can be written as a finite intersection of the sets $A_k$ and their complements.

Solution: First note that $I_{n,j} \subseteq A_k$ or $I_{n,j} \subseteq A_k^c$ for all $k = 1, \dots, n$. Hence, for $k = 1, \dots, n$ we let $B_{j,k} := A_k$ if $I_{n,j} \subseteq A_k$ and $B_{j,k} := A_k^c$ if $I_{n,j} \subseteq A_k^c$. Then

\[
I_{n,j} = \bigcap_{k=1}^n B_{j,k}.
\]
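For instance (a concrete case added for illustration), $I_{2,1} = [\tfrac{1}{4}, \tfrac{1}{2})$ satisfies $I_{2,1} \subseteq A_1^c = [0, \tfrac{1}{2})$ and $I_{2,1} \subseteq A_2$, and indeed

\[
A_1^c \cap A_2 = [0, \tfrac{1}{2}) \cap \bigl( [\tfrac{1}{4}, \tfrac{1}{2}) \cup [\tfrac{3}{4}, 1) \bigr) = [\tfrac{1}{4}, \tfrac{1}{2}) = I_{2,1}.
\]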

(c) Argue why the probability measure in the above situation is the Lebesgue measure on $[0, 1)$.

Solution: By (b), each $I_{n,j}$ is the intersection of $n$ independent events of probability $1/2$, so $P(I_{n,j}) = 2^{-n}$, which is its length. Every interval $(a, b) \subseteq [0, 1)$ can be written as a disjoint union of countably many intervals of the form $[j/2^n, (j+1)/2^n)$ and therefore its measure is equal to $b - a$. This induces Lebesgue measure.
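As a worked instance (not on the sheet): the interval $[\tfrac{1}{4}, \tfrac{7}{8})$ decomposes as

\[
[\tfrac{1}{4}, \tfrac{7}{8}) = [\tfrac{1}{4}, \tfrac{1}{2}) \cup [\tfrac{1}{2}, \tfrac{3}{4}) \cup [\tfrac{3}{4}, \tfrac{7}{8}),
\]

so its probability is $\tfrac{1}{4} + \tfrac{1}{4} + \tfrac{1}{8} = \tfrac{5}{8} = \tfrac{7}{8} - \tfrac{1}{4}$, its length.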

Copyright © 2009 The University of Sydney