These are notes from the 'Statistical Data Mining' course I took at university in Spring 2021. Corrections are always welcome :)


Set-up

Can we find the square root of a matrix $A$? That is, is there a matrix $B$ such that $B B = A$?

For a matrix to have a well-behaved square root, $A$ should at least be a symmetric matrix; $A \in \mathbb{R}^{n\times n}$.

On the real line, a number $x$ has a square root only when $x \ge 0$. (Complex square roots exist too, but here we only consider the square root defined over the reals.) In other words, to define a square root for matrices we first need to lift the notion of "non-negative" up to the world of matrices. In that sense, the <Nonnegative Definite> property we look at here is best thought of as an extension of "non-negative".

Nonnegative Definite Matrices

Theorem.

For a symmetric matrix $A \in \mathbb{R}^{n \times n}$, T.F.A.E.

(1) $A$ is <non-negative definite>, denoted $A \succeq 0$:

\[\mathbf{x}^T A \mathbf{x} \ge 0 \quad \text{for every} \quad \mathbf{x} \in \mathbb{R}^{n}\]

(2) All eigenvalues of $A$ are non-negative.

(3) $A = B^T B$ for some $B$.

(4) $A$ is the variance-covariance matrix of some random vector.


Let's unpack what <Nonnegative Definite> means. Since $A$ is a symmetric matrix, it admits a <spectral decomposition>; $A = UDU^T$

\[\begin{aligned} A = UDU^T = d_1 \mathbf{u}_1 \mathbf{u}_1^T + \cdots + d_n \mathbf{u}_n \mathbf{u}_n^T \end{aligned}\]

Now multiply by a vector $\mathbf{x}$ on both sides.

\[\begin{aligned} \mathbf{x}^T A \mathbf{x} &= \mathbf{x}^T \left( d_1 \mathbf{u}_1 \mathbf{u}_1^T + \cdots + d_n \mathbf{u}_n \mathbf{u}_n^T \right) \mathbf{x} \\ &= d_1 (\mathbf{u}_1^T \mathbf{x})^2 + \cdots + d_n (\mathbf{u}_n^T \mathbf{x})^2 \quad (\because \mathbf{x}^T \mathbf{u}_i {\mathbf{u}_i}^T \mathbf{x} = \mathbf{x}^T \mathbf{u}_i \cdot {\mathbf{u}_i}^T \mathbf{x} ) \end{aligned}\]

What do we need for $\mathbf{x}^T A \mathbf{x} \ge 0$ to hold? First, note that every term $(\mathbf{u}_i^T \mathbf{x})^2$ on the right-hand side is non-negative. So if all the $d_i$ are non-negative, the right-hand side is automatically non-negative as well! This proves the ($\impliedby$) direction.

For the ($\implies$) direction, just plug in $\mathbf{x} = \mathbf{u}_i$.

\[{\mathbf{u}_i}^T A \mathbf{u}_i = d_i ({\mathbf{u}_i}^T \mathbf{u}_i)^2 = d_i \ge 0\]

์œ„์˜ ๋ถ€๋“ฑ์‹์„ ๋งŒ์กฑํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” $d_i \ge 0$์ด ๋˜์–ด์•ผ ํ•œ๋‹ค.

$\blacksquare$
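As a quick numerical sanity check (a small numpy sketch of my own; the random matrix `B` and the loop below are purely illustrative, not part of the lecture notes), we can build $A = B^T B$ as in statement (3) and spot-check statements (1) and (2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Statement (3): any matrix of the form A = B^T B is nonnegative definite.
B = rng.standard_normal((3, 5))
A = B.T @ B                          # 5x5 symmetric, rank <= 3, so singular but still NND

# Statement (2): all eigenvalues of A are nonnegative (up to round-off).
print(np.linalg.eigvalsh(A))         # a couple of ~0 eigenvalues, none truly negative

# Statement (1): x^T A x >= 0 for every x -- spot-check with random vectors.
for _ in range(1000):
    x = rng.standard_normal(5)
    assert x @ A @ x >= -1e-9
```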


Positive Definite Matrices

Theorem.

For a symmetric matrix $A \in \mathbb{R}^{n \times n}$, T.F.A.E.

(1) $A$ is <positive definite>[^1], denoted $A \succ 0$:

\[\mathbf{x}^T A \mathbf{x} > 0 \quad \text{for every} \quad \mathbf{x} \in \mathbb{R}^{n} \setminus \{\mathbf{0}\}\]


(2) All eigenvalues of $A$ are positive.

(3) $A = B^T B$ for some non-singular $B$.

(4) $A$ is non-negative definite and non-singular.

(5) There exist linearly independent vectors $\mathbf{u}_1, \cdots, \mathbf{u}_n \in \mathbb{R}^n$ s.t. \(\displaystyle A = \sum^n_{j=1} \mathbf{u}_j {\mathbf{u}_j}^T\)


Let's look more closely at statement (4). If the matrix $A$ is SPD (symmetric and positive definite), how can we find $A^{1/2}$? The answer, once again, is the <Spectral Decomposition>!!

By the <Spectral Decomposition>, the matrix $A$ factors as follows.

\[A = UDU^T\]

If we compute $A^2$ from this,

\[A^2 = A \cdot A = (UDU^T) (UDU^T) = UD^2U^T\]

In other words, because of the nice orthogonal matrix $U$ (for which $U^T U = I$), power operations on the matrix $A$ become easy!!

์ด ๊ฐ™์€ ์•„์ด๋””์–ด๋กœ $A^{1/2}$๋ฅผ ์œ ๋„ํ•ด๋ณด์ž. ๊ฐ„๋‹จํ•˜๊ฒŒ ์ถ”๋ก ํ•˜๋ฉด ์•„๋ž˜์™€ ๊ฐ™์ด ๋˜์ง€ ์•Š์„๊นŒ?

\[A^{1/2} = UD^{1/2}U^T\]

Correct! In the same way, negative powers are also easy to define.

\[A^{-1} = UD^{-1}U^T\]
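In code this recipe is only a few lines once we have the spectral decomposition. Below is a minimal numpy sketch of my own (using a randomly generated SPD matrix, not anything from the notes) that builds $A^{1/2}$ and $A^{-1}$ exactly as above:

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric positive definite matrix: A = B^T B with B (almost surely) non-singular.
B = rng.standard_normal((4, 4))
A = B.T @ B

# Spectral decomposition A = U D U^T; eigh returns eigenvalues d and an orthonormal U.
d, U = np.linalg.eigh(A)

A_half = U @ np.diag(np.sqrt(d)) @ U.T    # A^{1/2} = U D^{1/2} U^T
A_inv  = U @ np.diag(1.0 / d)    @ U.T    # A^{-1}  = U D^{-1}  U^T

print(np.allclose(A_half @ A_half, A))        # True: (A^{1/2})^2 recovers A
print(np.allclose(A_inv, np.linalg.inv(A)))   # True: matches the usual inverse
```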

Convex Function

Using a positive definite matrix $A$, we can construct a <convex function>[^2]. First, let's recall the definition of a <convex function>.

Definition.

A function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is said to be <convex> if

\[f(\lambda \mathbf{x} + (1-\lambda)\mathbf{y}) \le \lambda f(\mathbf{x}) + (1-\lambda) f(\mathbf{y})\]

for every $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ and $0\le \lambda \le 1$.

์‹์„ ์ž˜ ์‚ดํŽด๋ณด๋ฉด, $\lambda \mathbf{x} + (1-\lambda)\mathbf{y}$๋Š” $\mathbf{x}$, $\mathbf{y}$ ์‚ฌ์ด์˜ ๋‚ด๋ถ„์ ์ด๋‹ค. ๋˜ํ•œ, $\lambda f(\mathbf{x}) + (1-\lambda) f(\mathbf{y})$๋Š” ๋‘ ์  $\mathbf{x}$, $\mathbf{y}$๋ฅผ ์ž‡๋Š” ์ง์„  ์œ„์˜ ํ•œ ์ ์ด๋‹ค.

So the inequality says that the function value at the interpolated point never exceeds the value on the chord!


Let's look at a theorem about <convex> functions.

Theorem.

Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be twice continuously differentiable. Then $f$ is convex if and only if

\[\frac{\partial^2 f}{\partial \mathbf{x} \partial \mathbf{x}^T} \succeq 0\]

In other words, if the second derivative (the Hessian) is always nonnegative definite, the function is convex! Recalling that for the one-variable quadratic $f(x) = ax^2 + bx + c$ convexity corresponds to $f''(x) = 2a \ge 0$ should make this feel natural.


This time, let's start from a matrix with $A \succeq 0$ and derive a convex function.

Example.

A quadratic function $f(\mathbf{x}) = \mathbf{x}^T A \mathbf{x} + \mathbf{a}^T \mathbf{x}$ is convex if and only if $A \succeq 0$.

With $f(\mathbf{x})$ defined as above, differentiating twice gives

\[\frac{\partial^2 f}{\partial \mathbf{x} \partial \mathbf{x}^T} = 2A\]

๊ฐ€ ๋˜๊ธฐ ๋•Œ๋ฌธ์— convex function์ด ๋œ๋‹ค. Quadratic form์—์„œ convex์ธ ์„ฑ์งˆ์€ ์ •๋ง ์ค‘์š”ํ•œ๋ฐ, Quadratic form์ด convex๊ฐ€ ๋˜์–ด์•ผ max/min์„ ๋…ผํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค!


Orthogonal Projection

Definition.

For a subspace $\mathcal{L} \subseteq \mathbb{R}^n$, the <orthogonal complement> of $\mathcal{L}$ is defined as

\[\mathcal{L}^{\perp} = \{ \mathbf{x} \in \mathbb{R}^n : \mathbf{x}^T \mathbf{y} = 0 \quad \text{for all} \quad \mathbf{y} \in \mathcal{L} \}\]

The <orthogonal complement> is simply the familiar notion of orthogonality between vectors lifted to a whole subspace of the vector space.

Theorem.

Each $\mathbf{x} \in \mathbb{R}^n$ can be uniquely represented as

\[\mathbf{x} = \mathbf{x}_{\mathcal{L}} + \mathbf{x}_{\mathcal{L^{\perp}}}\]

where \(\mathbf{x}_{\mathcal{L}} \in \mathcal{L}\) and \(\mathbf{x}_{\mathcal{L^{\perp}}} \in \mathcal{L}^{\perp}\).

The vector $\mathbf{x}_{\mathcal{L}}$ above is called the <orthogonal projection> of $\mathbf{x}$ onto $\mathcal{L}$.

Moreover, this <orthogonal projection> is a linear mapping, so it can be expressed as a matrix!!

The map $\mathbf{x} \mapsto \mathbf{x}_{\mathcal{L}}$ is a linear mapping.

Theorem.

\[\| \mathbf{x} - \mathbf{x}_{\mathcal{L}}\| \le \| \mathbf{x} - \mathbf{y} \| \quad \text{for every} \quad \mathbf{y} \in \mathcal{L}\]

The inequality says that, among all vectors of $\mathcal{L}$, the orthogonal projection $\mathbf{x}_{\mathcal{L}}$ is the one closest to $\mathbf{x}$; projecting gives the shortest possible distance from $\mathbf{x}$ to the subspace.
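Here is a small numpy illustration of this "closest point" property (my own sketch; $\mathcal{L}$ is taken to be the column space of a random $5 \times 2$ matrix `V`):

```python
import numpy as np

rng = np.random.default_rng(3)

# L = column space of V, a 2-dimensional subspace of R^5.
V = rng.standard_normal((5, 2))
x = rng.standard_normal(5)

# Orthogonal projection of x onto L, computed via least squares:
# x_L = V (V^T V)^{-1} V^T x.
coef, *_ = np.linalg.lstsq(V, x, rcond=None)
x_L = V @ coef

# Every other point y of L is at least as far from x as x_L is.
for _ in range(1000):
    y = V @ rng.standard_normal(2)
    assert np.linalg.norm(x - x_L) <= np.linalg.norm(x - y) + 1e-12
```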

Definition. idempotent or projection

$A \in \mathbb{R}^{n\times n}$ is called an <idempotent> or <projection> matrix if $A^2 = A$.

Theorem.

T.F.A.E.

(1) $A\mathbf{x}$ is the orthogonal projection of $\mathbf{x}$ onto $\mathcal{C}(A)$.

Here, multiplying $\mathbf{x}$ by $A$ is a mapping whose image is the set $\{A\mathbf{x} : \mathbf{x} \in \mathbb{R}^n \}$, and that set is exactly the column space $\mathcal{C}(A)$.

(2) $A$ is a projection and $\mathcal{N}(A) \perp \mathcal{C}(A)$.

That is, for every $\mathbf{x} \in \mathcal{N}(A)$ and $\mathbf{y} \in \mathcal{C}(A)$, we have $\mathbf{x}^T \mathbf{y} = 0$.

(3) $A$ is symmetric and idempotent.

So if any one of the statements above holds, $A$ is the <orthogonal projection matrix> onto $\mathcal{C}(A)$.


Theorem.

Let $A \in \mathbb{R}^{n\times n}$ be symmetric. Then, T.F.A.E.

(1) $A^2 = A$

(2) All eigenvalues of $A$ are either 0 or 1.

(3) $\text{rank}(A) + \text{rank}(I_n - A) = n$

((1)$\implies$(2)) can be proved easily using the <spectral decomposition>.

Because $A$ is symmetric, $A = UDU^T$ by the spectral theorem.

By statement (1), $A^2 = A$

\[A^2 = (UDU^T)(UDU^T) = UD^2U^T = UDU^T\]

๋”ฐ๋ผ์„œ, $D^2 = D$. ์ด๊ฒƒ์„ ๋งŒ์กฑํ•˜๋ ค๋ฉด, $d_i^2 = d_i$๊ฐ€ ๋˜์–ด์•ผ ํ•œ๋‹ค. ์ด๊ฒƒ์€ $d_i = 0$ or $d_i = 1$์ผ ๋•Œ๋งŒ ๊ฐ€๋Šฅํ•˜๋‹ค. $\blacksquare$

The fact that each eigenvalue $d_i$ is either 0 or 1 tells us that the projection $A$ is an operation that keeps only the directions $\mathbf{u}_i$ with $d_i = 1$ and annihilates the rest.

Let's also prove ((2)$\implies$(3)). This follows easily from the relation between rank and eigenvalues.

For a symmetric matrix, the rank equals the number of non-zero eigenvalues. Since the eigenvalues of the orthogonal projection $A$ are all 0 or 1, we simply count how many of the $d_i$ equal 1.

Now look at $I_n - A = U(I_n - D)U^T$. It toggles each eigenvalue: every $d_i = 0$ becomes 1 and every $d_i = 1$ becomes 0. Hence the rank of $I_n - A$ is complementary to that of $A$: $\text{rank}(I_n - A) = n - \text{rank}(A)$. $\blacksquare$
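A quick numpy check of all three statements (again my own sketch, building a rank-2 symmetric projection from an orthonormal basis `Q`):

```python
import numpy as np

rng = np.random.default_rng(4)

n, k = 6, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))   # n x k matrix with orthonormal columns
A = Q @ Q.T                                        # symmetric projection onto span(Q)

print(np.allclose(A @ A, A))                       # (1) A^2 = A
print(np.round(np.linalg.eigvalsh(A), 8))          # (2) eigenvalues are only 0's and 1's
r, r_c = np.linalg.matrix_rank(A), np.linalg.matrix_rank(np.eye(n) - A)
print(r, r_c, r + r_c == n)                        # (3) rank(A) + rank(I - A) = n
```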


Finally, the last theorem. The statement below gets used over and over when <Regression> comes up in this <Statistical Data Mining> course, so it is really important! 🔥

Theorem.

Let $X = (\mathbf{x}_1, \dots, \mathbf{x}_p)$ be an $n\times p$ matrix with $\text{rank}(X) = p$[^3] and

\[H = X(X^TX)^{-1}X^T\]

Then, $H$ is the orthogonal projection onto $C(X)$, that is

(1) $H^2 = H$

(2) $\mathcal{C}(H) \perp \mathcal{N}(H)$

(3) $\mathcal{C}(H) = \mathcal{C}(X)$

The matrix $H$ derived from $X$ in this way is called the <hat matrix>.
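To wrap up, here is a short numpy sketch (my addition; `X` and `y` are random stand-ins for a design matrix and a response vector) verifying the three hat matrix properties and hinting at why it matters for regression:

```python
import numpy as np

rng = np.random.default_rng(5)

n, p = 10, 3
X = rng.standard_normal((n, p))              # full column rank (almost surely)
H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix H = X (X^T X)^{-1} X^T

print(np.allclose(H @ H, H))                 # (1) idempotent
print(np.allclose(H, H.T))                   # symmetric, so (2) C(H) is perpendicular to N(H)
print(np.allclose(H @ X, X))                 # (3) H fixes every column of X, so C(H) = C(X)

# In regression, H maps the response y to the fitted values y_hat = H y,
# i.e., the orthogonal projection of y onto C(X).
y = rng.standard_normal(n)
y_hat = H @ y
print(np.allclose(X.T @ (y - y_hat), 0))     # residuals are orthogonal to C(X)
```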


[^1]: For reference, the zero vector $\mathbf{0}$ is excluded because plugging it in always gives $\mathbf{x}^T A \mathbf{x} = 0$.

[^2]: "convex" describes a shape that bulges outward, while "concave" describes one that curves inward.

[^3]: That is, all the columns $\mathbf{x}_i$ of $X$ are linearly independent.