Moment-Based Descriptors
Mean
- The expected value of a random variable $X$ is a measure of the central tendency of the distribution of $X$. It is represented by $\mathbb{E}[X]$, and is the value we expect to obtain "on average" if we continue to take observations of $X$ and average out the results. The mean of a discrete distribution with PMF $p(x)$ is
$$\mathbb{E}[X] = \sum_x x\,p(x).$$
- The mean of a continuous random variable $X$ with PDF $f(x)$ is
$$\mathbb{E}[X] = \int_{-\infty}^{\infty} x\,f(x)\,dx.$$
- Computing with Julia
using QuadGK
sup = (-1, 1)                        # common support of both densities
f1(x) = 3/4*(1-x^2)                  # quadratic density on (-1,1)
f2(x) = x < 0 ? x+1 : 1-x            # triangular density on (-1,1)
# numerically integrate x*f(x) over the support to obtain the mean
expect(f, support) = quadgk(x -> x*f(x), support...)[1]
println("Mean 1: ", expect(f1, sup))
println("Mean 2: ", expect(f2, sup))
General Expectation and Moments
- In general, for a function $g$ and a random variable $X$, we can consider the random variable $Y = g(X)$. The distribution of $Y$ will typically be different from the distribution of $X$.
- A common case is $g(x) = x^n$, in which case we call $\mathbb{E}[X^n]$ the $n$-th moment of $X$. Then, for a continuous random variable $X$ with PDF $f(x)$, the $n$-th moment of $X$ is
$$\mathbb{E}[X^n] = \int_{-\infty}^{\infty} x^n\,f(x)\,dx.$$
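The moment formula above can be evaluated numerically with the same `quadgk` pattern used for the mean. The following sketch (the helper name `moment` is my own) computes moments of the density `f1` from the earlier example; for that density, the first moment is $0$ by symmetry and the second moment works out to $1/5$:

```julia
using QuadGK

# n-th moment: numerically integrate x^n * f(x) over the support
moment(f, support, n) = quadgk(x -> x^n * f(x), support...)[1]

f1(x) = 3/4*(1-x^2)          # density from the mean example above
sup = (-1, 1)

println("1st moment (mean): ", moment(f1, sup, 1))   # ≈ 0
println("2nd moment: ", moment(f1, sup, 2))          # ≈ 1/5
```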
Variance
- The variance of a random variable $X$, often denoted as $\mathrm{Var}(X)$ or $\sigma^2$, is a measure of the spread, or dispersion, of the distribution of $X$. It is defined by
$$\mathrm{Var}(X) = \mathbb{E}\left[(X-\mu)^2\right], \qquad \text{where } \mu = \mathbb{E}[X].$$
- Therefore, for a discrete random variable with PMF $p(x)$:
$$\mathrm{Var}(X) = \sum_x (x-\mu)^2\,p(x).$$
- For a continuous random variable with PDF $f(x)$:
$$\mathrm{Var}(X) = \int_{-\infty}^{\infty} (x-\mu)^2\,f(x)\,dx.$$
- Besides, the variance of $X$ can also be considered as the expectation of a new random variable, $(X-\mu)^2$.
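Expanding the square in the definition yields a convenient computational form, using the linearity of expectation:

```latex
\begin{aligned}
\mathrm{Var}(X) &= \mathbb{E}\left[(X-\mu)^2\right]
                 = \mathbb{E}\left[X^2 - 2\mu X + \mu^2\right] \\
                &= \mathbb{E}[X^2] - 2\mu\,\mathbb{E}[X] + \mu^2
                 = \mathbb{E}[X^2] - \mu^2 .
\end{aligned}
```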
- Simulation with Julia
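No code follows this heading in the notes; as a sketch, the variance of the triangular density `f2` above (which equals $1/6$ analytically) can be estimated by Monte Carlo, using the fact that the sum of two independent uniforms on $(0,1)$, shifted down by $1$, has exactly that density:

```julia
using Random, Statistics

Random.seed!(0)
N = 10^6

# X = U1 + U2 - 1 has the triangular density f2 on (-1,1)
data = [rand() + rand() - 1 for _ in 1:N]

println("Sample mean: ", mean(data))      # ≈ 0
println("Sample variance: ", var(data))   # ≈ 1/6
```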
Higher Order Descriptors: Skewness and Kurtosis
- Take a random variable $X$ with $\mathbb{E}[X] = \mu$ and $\mathrm{Var}(X) = \sigma^2$, then the skewness is defined as
$$\gamma_1 = \mathbb{E}\left[\left(\frac{X-\mu}{\sigma}\right)^3\right].$$
- The skewness is a measure of the asymmetry of the distribution. For a distribution having a density function symmetric about the mean, we have $\gamma_1 = 0$. Otherwise, it is either positive or negative depending on whether the distribution is skewed to the right or to the left.
- The kurtosis is defined as
$$\mathrm{Kurt}(X) = \mathbb{E}\left[\left(\frac{X-\mu}{\sigma}\right)^4\right].$$
- The kurtosis is a measure of the tails of the distribution. Any normal probability distribution has $\mathrm{Kurt}(X) = 3$. A probability distribution with a higher value of $\mathrm{Kurt}(X)$ can be interpreted as having "heavier tails", while a probability distribution with a lower value is said to have "lighter tails".
- The excess kurtosis is defined as $\mathrm{Kurt}(X) - 3$, so that any normal distribution has excess kurtosis $0$.
- Skewness and kurtosis are invariant to changes in the location and scale of the distribution.
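These descriptors are available directly in the Distributions.jl package via `skewness` and `kurtosis` (where, as far as I'm aware, `kurtosis` returns the *excess* kurtosis by default). A quick sketch, also illustrating the location/scale invariance:

```julia
using Distributions

# skewness and excess kurtosis of some standard distributions
println("Normal:      ", skewness(Normal()), "  ", kurtosis(Normal()))
println("Exponential: ", skewness(Exponential(1)), "  ", kurtosis(Exponential(1)))

# invariance: shifting and rescaling changes neither descriptor
@show skewness(Normal(5, 3)) == skewness(Normal(0, 1))
@show kurtosis(Normal(5, 3)) == kurtosis(Normal(0, 1))
```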
Laws of Large Numbers
- Empirical averages converge to expected values. Consider a sequence $X_1, X_2, \ldots$ of independent and identically distributed random variables. Then for each $n$, we compute the sample mean
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i.$$
- If the expected value of each of the random variables is $\mu$, then a law of large numbers is a claim that the sequence $\bar{X}_1, \bar{X}_2, \ldots$ converges to $\mu$. The distinction between "weak" and "strong" lies in the mode of convergence.
- The weak law of large numbers claims convergence of a sequence of probabilities: for every $\varepsilon > 0$,
$$\lim_{n\to\infty} P\left(|\bar{X}_n - \mu| \ge \varepsilon\right) = 0.$$
As $n$ grows, the likelihood of the sample mean being farther away than $\varepsilon$ from the mean vanishes.
- The strong law of large numbers states that
$$P\left(\lim_{n\to\infty} \bar{X}_n = \mu\right) = 1.$$
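The convergence of $\bar{X}_n$ to $\mu$ can be observed directly by simulation. A minimal sketch with i.i.d. Uniform$(0,1)$ draws, for which $\mu = 0.5$:

```julia
using Random, Statistics

Random.seed!(1)
mu = 0.5                          # true mean of Uniform(0,1)

# sample means for increasing n drift toward mu = 0.5
for n in (10, 10^2, 10^4, 10^6)
    xbar = mean(rand(n))          # sample mean of n i.i.d. Uniform(0,1) draws
    println("n = $n:  sample mean = $xbar")
end
```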