GitHub Repository: Probability-Statistics-Jupyter-Notebook/probability-statistics-notebook
Path: blob/master/notebook-for-reviewing/chapter_7_statistical_estimate_and_sampling_distribution.ipynb
Kernel: Python 3

Chapter 7 Statistical Estimation and Sampling Distribution

7.1 Point Estimates

Parameters: values used to describe a probability distribution, e.g. the mean and variance.

Statistics: a function of a random sample from a distribution, i.e. a quantity computed from observations randomly drawn from the distribution.

Estimate: a guess of the parameter value based on a statistic.
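For example, the sample mean and sample variance are statistics that serve as point estimates of a distribution's mean and variance. A minimal sketch (the $N(5, 2^2)$ population and sample size are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random sample from a N(5, 2^2) population (parameters chosen for illustration)
sample = rng.normal(loc=5.0, scale=2.0, size=1000)

# Statistics computed from the sample serve as point estimates of the parameters
mean_hat = sample.mean()       # estimates the population mean (5)
var_hat = sample.var(ddof=1)   # unbiased estimate of the population variance (4)

print(mean_hat, var_hat)
```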

7.2 Properties of Point Estimates

Estimate bias: $bias(\hat{\theta}) = E(\hat{\theta}) - \theta$. If $bias(\hat{\theta}) = 0$, the estimator is called an unbiased estimator.

Estimate variance: $Var(\hat{\theta})$, defined in the same way as the variance of any random variable.

Mean squared error (MSE): $MSE(\hat{\theta}) = E[(\hat{\theta} - \theta)^2] = Var(\hat{\theta}) + bias^2(\hat{\theta})$
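The decomposition $MSE = Var + bias^2$ can be checked by simulation. A sketch under an assumed, illustrative setup: the biased variance estimator that divides by $n$ (rather than $n-1$), applied to many small normal samples:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 4.0            # true variance of the N(0, 2^2) population
n, reps = 10, 200_000  # small samples, many repetitions

samples = rng.normal(0.0, 2.0, size=(reps, n))
est = samples.var(axis=1, ddof=0)  # biased estimator: divides by n

bias = est.mean() - theta          # E(theta_hat) - theta, about -theta/n
var = est.var()                    # Var(theta_hat)
mse = ((est - theta) ** 2).mean()  # E[(theta_hat - theta)^2]

# The identity MSE = Var + bias^2 holds (up to floating-point error)
print(mse, var + bias ** 2)
```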

7.3 Sampling Distribution

$X \sim B(n, p) \Longrightarrow \hat{p} \sim N\left(p, \dfrac{p(1-p)}{n}\right)$ approximately, for large $n$.
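A quick simulation (sample size and $p$ chosen arbitrarily) shows the sample proportion $\hat{p}$ centered at $p$ with variance close to $p(1-p)/n$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, reps = 500, 0.3, 100_000

# Each draw counts successes in n Bernoulli(p) trials; p_hat is the proportion
p_hat = rng.binomial(n, p, size=reps) / n

print(p_hat.mean())  # close to p = 0.3
print(p_hat.var())   # close to p(1-p)/n = 0.00042
```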

$X \sim N(\mu, \sigma^2) \Longrightarrow T = \dfrac{\sqrt{n}(\bar{X} - \mu)}{S} \sim t_{n-1}$, where $S$ is the sample standard deviation.
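Likewise, the simulated $T$ statistic can be compared against the $t_{n-1}$ reference distribution. A sketch assuming SciPy is available for the exact quantile (parameter values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, n, reps = 10.0, 3.0, 8, 50_000

x = rng.normal(mu, sigma, size=(reps, n))
# T = sqrt(n) * (x_bar - mu) / S, computed for each of the reps samples
t_sim = np.sqrt(n) * (x.mean(axis=1) - mu) / x.std(axis=1, ddof=1)

# The simulated 97.5% quantile should match the t_{n-1} quantile
print(np.quantile(t_sim, 0.975), stats.t.ppf(0.975, df=n - 1))
```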

7.4 Constructing Parameter Estimate

Moment estimation method (method of moments)

For parameters $\hat{\theta} = [\theta_1, \theta_2, \dots, \theta_n]$, we set up the moment equations

$\bar{x} = E(X)$, $\overline{x^2} = E(X^2)$, $\dots$, $\overline{x^n} = E(X^n)$

where $\overline{x^k}$ denotes the $k$-th sample moment and each $E(X^k)$ is a function of the parameters.

Solving these equations gives $\hat{\theta}$.
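As a worked example (the Gamma model and parameter values are illustrative), for $X \sim$ Gamma$(\alpha, \lambda)$ the first two moments are $E(X) = \alpha/\lambda$ and $E(X^2) = \alpha(\alpha+1)/\lambda^2$; equating them to the sample moments and solving gives $\hat{\alpha} = \bar{x}^2/(\overline{x^2} - \bar{x}^2)$ and $\hat{\lambda} = \bar{x}/(\overline{x^2} - \bar{x}^2)$:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, lam = 3.0, 2.0  # true Gamma(shape, rate) parameters (illustrative)
x = rng.gamma(shape=alpha, scale=1.0 / lam, size=200_000)

# Sample moments
m1 = x.mean()         # estimates E(X)   = alpha / lam
m2 = (x ** 2).mean()  # estimates E(X^2) = alpha (alpha + 1) / lam^2

# Solve the two moment equations (note m2 - m1^2 estimates Var(X) = alpha/lam^2)
alpha_hat = m1 ** 2 / (m2 - m1 ** 2)
lam_hat = m1 / (m2 - m1 ** 2)

print(alpha_hat, lam_hat)
```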

Maximum likelihood estimate method

$\hat{\theta}_{MLE} = \arg\max_{\theta} L(\theta) = \arg\max_{\theta} f(x_1; \theta) \times f(x_2; \theta) \times \dots \times f(x_n; \theta)$

where $L(\theta)$ is the likelihood function.
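A minimal MLE sketch, assuming an Exponential($\lambda$) model with density $f(x; \lambda) = \lambda e^{-\lambda x}$ (the model and parameter value are illustrative). Maximizing $L(\lambda)$ analytically gives $\hat{\lambda} = 1/\bar{x}$; the code below checks this against SciPy's numerical optimizer applied to the negative log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
lam = 1.5  # true rate (illustrative)
x = rng.exponential(scale=1.0 / lam, size=100_000)

# Negative log-likelihood: -log L(lam) = -n * log(lam) + lam * sum(x)
def neg_log_lik(l):
    return -len(x) * np.log(l) + l * x.sum()

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")

# Closed-form maximizer for comparison: lam_hat = 1 / x_bar
print(res.x, 1.0 / x.mean())
```

In practice the log-likelihood, not the raw product of densities, is maximized, since the product underflows quickly for large samples.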