This post summarizes what I learned and studied in the course "Probability and Statistics (MATH230)". You can find the full series of posts at Probability and Statistics 🎲

Gamma Function

Definition. Gamma Function

The <Gamma Function> $\Gamma(\alpha): (0, \infty) \rightarrow (0, \infty)$ is defined as

\[\Gamma(\alpha) = \int^{\infty}_0 t^{\alpha - 1} e^{-t} dt \quad \text{for} \; \alpha > 0\]

The Gamma function, introduced by Euler in 1730, is one of the most famous functions in mathematics. It arose from the attempt to extend the <factorial> $n!$, defined only for integers, to the real and complex numbers. In other words, it can be regarded as a generalization of the <factorial>.
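To get a feel for the definition, the integral can be evaluated numerically and compared with a library implementation. Below is a minimal sketch, assuming `numpy` and `scipy` are available; the helper name `gamma_by_integral` is just illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gamma_by_integral(alpha: float) -> float:
    """Evaluate Gamma(alpha) = int_0^inf t^(alpha-1) e^(-t) dt by numerical quadrature."""
    value, _ = quad(lambda t: t ** (alpha - 1) * np.exp(-t), 0, np.inf)
    return value

for alpha in [0.5, 1.0, 2.5, 5.0]:
    # The quadrature result should agree with scipy.special.gamma
    print(alpha, gamma_by_integral(alpha), gamma(alpha))
```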

Remark.

1. base case

\[\begin{aligned} \Gamma(1) &= \int^{\infty}_0 t^{0} e^{-t} dt = \int^{\infty}_0 e^{-t} dt \\ &= \left.- e^{-t} \right]^{\infty}_0 = 1 \end{aligned}\]

2. successive case

\[\begin{aligned} \Gamma(\alpha + 1) &= \int^{\infty}_0 t^{\alpha} e^{-t} dt \\ &= \cancel{\left. t^{\alpha} (- e^{-t}) \right]^{\infty}_0} + \alpha \int^{\infty}_0 t^{\alpha - 1} e^{-t} dt \\ &= \alpha \Gamma(\alpha) \end{aligned}\]

3. factorial

\[\begin{aligned} \Gamma(n) &= (n-1) \cdot \Gamma(n-1) = (n-1)(n-2) \cdot \Gamma(n-2) = \cdots \\ &= \left((n-1)(n-2) \cdots 1\right) \cdot \Gamma(1) \\ &= (n-1)! \end{aligned}\]

4. (special case) normal distribution

\[\begin{aligned} \Gamma(1/2) &= \int^{\infty}_0 t^{-\frac{1}{2}} e^{-t} dt \\ &= \int^{\infty}_0 \frac{e^{-t}}{\sqrt{t}} dt \end{aligned}\]

Now substitute $t = \dfrac{x^2}{2}$. Then $dt = x \; dx \iff \dfrac{dt}{\sqrt{2t}} = dx$, so

\[\begin{aligned} \Gamma(1/2) &= \int^{\infty}_0 \frac{e^{-t}}{\sqrt{t}} dt \\ &= \sqrt{2} \int^{\infty}_0 e^{-\frac{x^2}{2}} dx \\ &= \sqrt{2} \cdot \sqrt{2\pi} \cdot \left( \frac{1}{\sqrt{2\pi}} \int^{\infty}_0 e^{-\frac{x^2}{2}} dx \right) \\ &= \sqrt{2} \cdot \sqrt{2\pi} \cdot 0.5 = \sqrt{\pi} \end{aligned}\]
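The identities above (the recurrence, the factorial relation, and $\Gamma(1/2) = \sqrt{\pi}$) are easy to verify numerically. A minimal sketch, again assuming `scipy` is available:

```python
import math
from scipy.special import gamma

# 2. recurrence: Gamma(alpha + 1) = alpha * Gamma(alpha)
for alpha in [0.3, 1.7, 4.2]:
    assert math.isclose(gamma(alpha + 1), alpha * gamma(alpha))

# 3. factorial: Gamma(n) = (n - 1)!
for n in range(1, 8):
    assert math.isclose(gamma(n), math.factorial(n - 1))

# 4. special value: Gamma(1/2) = sqrt(pi)
assert math.isclose(gamma(0.5), math.sqrt(math.pi))
```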

Gamma Distribution

Definition. Gamma Distribution

Let $\alpha > 0$ and $\beta > 0$. We say that $X$ has a <Gamma Distribution> with a shape parameter $\alpha$ and a scale parameter $\beta$, if its pdf is given by

\[f(x; \alpha, \beta) = \begin{cases} C_{\alpha, \beta} \cdot x^{\alpha-1} e^{-\frac{x}{\beta}} & \text{for } x > 0 \\ \quad 0 & \text{else} \end{cases}\]

μ΄λ•Œ, κ³„μˆ˜ $C_{\alpha, \beta}$λŠ”

\[C_{\alpha, \beta} \cdot \int^{\infty}_0 x^{\alpha - 1} e^{-\frac{x}{\beta}} \; dx = 1\]

이 λ˜λ„λ‘ ν•˜λŠ” $C_{\alpha, \beta}$λ₯Ό μ„ νƒν•œλ‹€. 이것을 잘 μ •λ¦¬ν•˜λ©΄,

\[C_{\alpha, \beta} = \frac{1}{\displaystyle \int^{\infty}_0 x^{\alpha - 1} e^{-\frac{x}{\beta}} \; dx} = \frac{1}{\Gamma(\alpha) \cdot \beta^{\alpha}}\]

(β–² μΉ˜ν™˜μ λΆ„μ„ 잘 μ“°λ©΄ μœ„μ™€ 같은 κ²°κ³Όκ°€ λ‚˜μ˜¨λ‹€ γ…Žγ…Ž)

κ°λ§ˆλΆ„ν¬λ₯Ό λ‹€μ‹œ κΈ°μˆ ν•΄λ³΄λ©΄,

\[\text{Gamma}(x; \alpha, \beta) = \frac{1}{\Gamma(\alpha) \beta^{\alpha}} \cdot x^{\alpha - 1} e^{-\frac{x}{\beta}} \quad \text{for } x > 0\]
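This pdf corresponds to SciPy's parameterization with shape `a = alpha` and `scale = beta`. Below is a minimal sketch, assuming `numpy`/`scipy`, that writes the density by hand, compares it with `scipy.stats.gamma`, and checks that the normalizing constant $C_{\alpha, \beta}$ really makes it integrate to 1 (the function name `gamma_pdf` is illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn
from scipy.stats import gamma as gamma_dist

def gamma_pdf(x, alpha, beta):
    """f(x; alpha, beta) = x^(alpha-1) e^(-x/beta) / (Gamma(alpha) * beta^alpha) for x > 0."""
    return x ** (alpha - 1) * np.exp(-x / beta) / (gamma_fn(alpha) * beta ** alpha)

alpha, beta = 2.5, 1.5
xs = np.linspace(0.1, 10.0, 50)

# Same density as SciPy's gamma distribution with a=alpha (shape) and scale=beta
print(np.allclose(gamma_pdf(xs, alpha, beta), gamma_dist.pdf(xs, a=alpha, scale=beta)))

# The normalizing constant makes the density integrate to 1
total, _ = quad(lambda x: gamma_pdf(x, alpha, beta), 0, np.inf)
print(total)  # ~ 1.0
```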

Remarks.

1. $\text{Gamma}(1, \beta) \overset{D}{=} \text{EXP}(\beta)$

If $\alpha = 1$, then

\[C_{1, \beta} = \frac{1}{\Gamma(1) \beta} = \frac{1}{\beta}\]

then

\[f(x) = \frac{1}{\beta} e^{-\frac{x}{\beta}}\]

λ”°λΌμ„œ, $\text{Gamma}(1, \beta) \overset{D}{=} \text{EXP}(\beta)$.


2. $\text{Gamma}(n, \beta)$; Generalization of <Exponential Distribution>

If $\alpha = n$, then $X \sim \text{Gamma}(n, \beta)$ can be used to model the waiting time until $n$ events have occurred.

In fact, $X$ can be written as $X = X_1 + \cdots + X_n$ where the $X_i$'s are independent $\text{EXP}(\beta)$ RVs, i.e. exponential waiting times with rate $\lambda = 1/\beta$.

For example, the distribution of the time until the $3$rd event occurs is $\text{Gamma}(3, \beta)$! So the Gamma distribution can also be viewed as a generalization of the <Exponential Distribution> (see the simulation sketch below).
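Both remarks can be illustrated numerically: $\text{Gamma}(1, \beta)$ has exactly the exponential density, and the sum of $n$ independent $\text{EXP}(\beta)$ waiting times is empirically $\text{Gamma}(n, \beta)$-distributed. A minimal sketch, assuming `numpy`/`scipy` (the Kolmogorov–Smirnov test is used only as a convenient goodness-of-fit check):

```python
import numpy as np
from scipy.stats import expon, gamma, kstest

rng = np.random.default_rng(0)
beta, n = 2.0, 3

# Remark 1: Gamma(1, beta) has the same pdf as EXP(beta)
xs = np.linspace(0.1, 10.0, 50)
print(np.allclose(gamma.pdf(xs, a=1, scale=beta), expon.pdf(xs, scale=beta)))

# Remark 2: the sum of n iid EXP(beta) waiting times is Gamma(n, beta)-distributed
waits = rng.exponential(scale=beta, size=(100_000, n)).sum(axis=1)
print(kstest(waits, gamma(a=n, scale=beta).cdf).pvalue)  # large p-value -> consistent
```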


Theorem.

If $X \sim \text{Gamma}(\alpha, \beta)$, then

  • $E[X] = \alpha \beta$
  • $\text{Var}(X) = \alpha \beta^2$

Proof.

\[\begin{aligned} E[X] &= \int^{\infty}_0 x f(x) dx \\ &= C_{\alpha, \beta} \int^{\infty}_0 x^{\alpha} e^{-x/\beta} dx \\ &= C_{\alpha, \beta} \cdot \left( \frac{1}{C_{\alpha+1, \beta}} \cancelto{1}{\int^{\infty}_0 C_{\alpha+1, \beta} \; x^{\alpha} e^{-x/\beta} dx} \right) \\ &= C_{\alpha, \beta} \cdot \frac{1}{C_{\alpha+1, \beta}} \\ &= \frac{1}{\Gamma(\alpha) \beta^{\alpha}} \cdot \Gamma(\alpha+1) \beta^{\alpha+1} \\ &= \frac{\Gamma(\alpha+1)}{\Gamma(\alpha)} \beta \end{aligned}\]

μ΄λ•Œ, 감마 ν•¨μˆ˜μ— λŒ€ν•œ <Remark 2>λ₯Ό μ‚¬μš©ν•˜λ©΄, κ²°κ΅­ $E[X]$λŠ” μ•„λž˜μ™€ κ°™λ‹€.

\[\begin{aligned} E[X] &= \frac{\Gamma(\alpha+1)}{\Gamma(\alpha)} \beta = \alpha \beta \end{aligned}\]

For the variance, we first need to compute $E[X^2]$.

\[\begin{aligned} E[X^2] &= \int^{\infty}_0 x^2 f(x) dx \\ &= C_{\alpha, \beta} \int^{\infty}_0 x^{\alpha + 1} e^{-x/\beta} dx \\ &= C_{\alpha, \beta} \cdot \left( \frac{1}{C_{\alpha+2, \beta}} \cancelto{1}{\int^{\infty}_0 C_{\alpha+2, \beta} \; x^{\alpha + 1} e^{-x/\beta} dx} \right) \\ &= C_{\alpha, \beta} \cdot \frac{1}{C_{\alpha+2, \beta}} \\ &= \frac{\Gamma(\alpha + 2)}{\Gamma(\alpha)} \beta^2 \\ &= (\alpha + 1) \alpha \, \beta^2 \end{aligned}\]

λ”°λΌμ„œ,

\[\begin{aligned} \text{Var}(X) &= E[X^2] - (E[X])^2 \\ &= (\alpha^2 + \alpha) \beta^2 - \alpha^2 \beta^2 \\ &= \alpha \beta^2 \end{aligned}\]
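Both formulas are easy to sanity-check against SciPy's closed-form moments and a Monte Carlo estimate. A minimal sketch, assuming `numpy`/`scipy`:

```python
import numpy as np
from scipy.stats import gamma

alpha, beta = 3.0, 2.0
rng = np.random.default_rng(1)

# Closed form from the theorem: E[X] = alpha * beta, Var(X) = alpha * beta^2
print(alpha * beta, alpha * beta ** 2)

# SciPy's analytic moments for the same parameterization
print(gamma.mean(a=alpha, scale=beta), gamma.var(a=alpha, scale=beta))

# Monte Carlo estimates from samples
samples = gamma.rvs(a=alpha, scale=beta, size=200_000, random_state=rng)
print(samples.mean(), samples.var())
```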

Relation to Poisson Process

Let $N(t)$ be a <Poisson process> with rate $\lambda$. Let $X$ be the time to the $n$-th event in the <Poisson process>.

Claim: $X \sim \text{Gamma}(n, \beta)$ where $\beta = 1/\lambda$.

The $n$-th event occurs after time $t$ if and only if fewer than $n$ events have occurred by time $t$, so

\[\begin{aligned} P(X > t) = P(N(t) < n) \end{aligned}\]

μ΄λ•Œ, $N(t) \sim \text{POI}(\lambda t)$μ΄λ―€λ‘œ,

\[\begin{aligned} P(N(t) < n) &= \sum^{n-1}_{k=0} P(N(t) = k) \\ &= \sum^{n-1}_{k=0} e^{-\lambda t} \frac{(\lambda t)^k}{k!} \end{aligned}\]

μœ„μ˜ 식을 톡해 $X$의 cdfλ₯Ό μ•Œκ³  μžˆμœΌλ‹ˆ, 이것을 λ―ΈλΆ„ν•΄ $X$의 pdfλ₯Ό μœ λ„ν•΄λ³΄μž.

\[\begin{aligned} \frac{d}{dt} P(X \le t) &= - \frac{d}{dt} P(X > t) \\ &= - \left( \sum^{n-1}_{k=0} (-\lambda) e^{-\lambda t} \frac{(\lambda t)^k}{k!} + \sum^{n-1}_{k=1} \lambda e^{-\lambda t} \frac{(\lambda t)^{(k-1)}}{(k-1)!}\right) \\ &= \lambda e^{-\lambda t} \cdot \left( \sum^{n-1}_{k=0} \frac{(\lambda t)^k}{k!} - \sum^{n-1}_{k=1} \frac{(\lambda t)^{(k-1)}}{(k-1)!} \right) \\ &= \lambda e^{-\lambda t} \frac{(\lambda t)^{(n-1)}}{(n-1)!} \\ &= \frac{\lambda^n}{(n-1)!} \cdot t^{n-1} e^{-\lambda t} \\ &= \frac{\lambda^n}{\Gamma(n)} \cdot t^{n-1} e^{-\lambda t} \\ &= \frac{1}{\Gamma(n) \beta^n} \cdot t^{n-1} e^{-t/\beta} \\ &= C_{n, \beta} \cdot t^{n-1} e^{-t/\beta} \\ &= f(t; n, \beta) \end{aligned}\]

That is, $X \sim \text{Gamma}(n, \beta)$. $\blacksquare$
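The key identity $P(X > t) = P(N(t) < n)$ can also be checked numerically: the survival function of $\text{Gamma}(n, 1/\lambda)$ should coincide with the Poisson cdf evaluated at $n - 1$. A minimal sketch, assuming `numpy`/`scipy`:

```python
import numpy as np
from scipy.stats import gamma, poisson

lam, n = 1.5, 4        # Poisson process rate lambda and target event count n
beta = 1.0 / lam

ts = np.linspace(0.5, 10.0, 6)
lhs = gamma.sf(ts, a=n, scale=beta)    # P(X > t) for X ~ Gamma(n, beta)
rhs = poisson.cdf(n - 1, mu=lam * ts)  # P(N(t) <= n - 1) with N(t) ~ POI(lambda * t)
print(np.allclose(lhs, rhs))           # True
```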


μ΄μ–΄μ§€λŠ” ν¬μŠ€νŠΈμ—μ„œλŠ” 감마 λΆ„ν¬μ˜ νŠΉμˆ˜ν•œ 경우둜 κΌ½νžˆλŠ” <Chi-square distribution>, <Beta distribution>κ³Ό <Log-normal distribution>에 λŒ€ν•΄ 닀룬닀 🀩

πŸ‘‰ Chi-square, Beta and Log-normal Distribution

