Problem 1: The Curse of Dimensionality
In this problem, we will study a high-dimensional statistical phenomenon called the Curse of Dimensionality. Originally coined by Bellman in the context of dynamic programming, it generally refers to the failure of certain statistical or algorithmic procedures to scale efficiently as the input dimension grows.
We will first study this curse through the geometry of the unit $d$-dimensional $\ell_2$ sphere. Recall that the $\ell_p$ norm of a vector $x \in \mathbb{R}^d$ is defined as $\|x\|_p := \left(\sum_i |x_i|^p\right)^{1/p}$, for $p \in [1,\infty)$. To get a first grasp on the phenomenon, we will first consider a 'Gaussian' approximation of the $d$-dimensional unit sphere, by considering a Gaussian random vector $X \sim \mathcal{N}(0, I/d)$, where $I$ is the $d \times d$ identity matrix.
1. Using the Central Limit Theorem, show that $\|X\|_2 = 1 + O(1/\sqrt{d})$ with high probability. In other words, show that $\|X\|_2$ is a random variable with expectation 1 and standard deviation proportional to $1/\sqrt{d}$. [Alternative: use the $\chi^2$ distribution with $d$ degrees of freedom.] (1 point)
2. Numerically verify this property by simulating $\|X\|_2$ in dimensions $d \in \{10, 100, 1000, 10000\}$ using a sample size of $n = 1000$. (2 points)
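For reference, a minimal simulation sketch for this question, assuming NumPy (the dimensions and sample size mirror the statement; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # sample size per dimension, as in the statement

for d in [10, 100, 1000, 10000]:
    # X ~ N(0, I/d): each coordinate is Gaussian with variance 1/d
    X = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))
    norms = np.linalg.norm(X, axis=1)
    # the mean should be close to 1, and the std should shrink like 1/sqrt(d)
    print(f"d={d:6d}  mean={norms.mean():.4f}  std={norms.std():.4f}  "
          f"std*sqrt(d)={norms.std() * np.sqrt(d):.3f}")
```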
This means that for large input dimension $d$, a draw $X$ from the $\mathcal{N}(0, I/d)$ Gaussian distribution will concentrate on the unit sphere, $\|X\|_2 \approx 1$. Let us also verify that the Gaussian distribution is rotationally invariant:
3. Let $R \in \mathbb{R}^{d \times d}$ be any unitary matrix, i.e. $R^\top R = R R^\top = I$. Show that the pdfs of $X \sim \mathcal{N}(0, I/d)$ and $\tilde{X} = RX$ are the same, i.e. they do not depend on the choice of $R$. (2 points)
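A quick empirical sanity check of this invariance (not a proof), assuming NumPy; the orthogonal matrix is obtained from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 50, 100_000

# Random orthogonal matrix R (R^T R = I) via QR decomposition of a Gaussian matrix
R, _ = np.linalg.qr(rng.normal(size=(d, d)))

X = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))  # rows are draws X ~ N(0, I/d)
RX = X @ R.T                                        # rows are R X

# A fixed linear functional <u, .> should have the same distribution before and after rotation
u = np.zeros(d)
u[0] = 1.0
print("var of <u, X> :", np.var(X @ u))   # both should be close to 1/d = 0.02
print("var of <u, RX>:", np.var(RX @ u))
```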
For our purposes, we will thus use Gaussian $\mathcal{N}(0, I/d)$ samples as de facto draws from the unit $d$-dimensional sphere.
4. If we draw two datapoints $X, X'$ i.i.d. from $\mathcal{N}(0, I/d)$, show that $|\langle X, X' \rangle| = O(1/\sqrt{d})$ with high probability, using again the CLT. In other words, show that $\langle X, X' \rangle$ is a random variable of zero mean and standard deviation proportional to $1/\sqrt{d}$. Conclude that for a constant $C > 0$, $\|X - X'\| \in \big(\sqrt{2} - C/\sqrt{d},\ \sqrt{2} + C/\sqrt{d}\big)$ for large $d$ with high probability. (2 points)
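The following sketch (assuming NumPy) illustrates both claims numerically: the inner product $\langle X, X'\rangle$ shrinks like $1/\sqrt{d}$ while $\|X - X'\|$ concentrates around $\sqrt{2}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000  # number of independent pairs (X, X')

for d in [10, 100, 1000, 10000]:
    X = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))
    Xp = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))
    inner = np.sum(X * Xp, axis=1)         # <X, X'>
    dist = np.linalg.norm(X - Xp, axis=1)  # ||X - X'||
    print(f"d={d:6d}  std(<X,X'>)*sqrt(d)={inner.std() * np.sqrt(d):.3f}  "
          f"mean ||X-X'||={dist.mean():.4f}  (sqrt(2)={np.sqrt(2):.4f})")
```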
This property reflects the intuition that independent draws from a high-dimensional Gaussian distribution (or any rotationally-invariant distribution, more generally) are nearly orthogonal as $d \to \infty$, since $\langle X, X' \rangle$ will have variance going to 0 as $d \to \infty$.
We will now combine this intuition with a simple supervised learning setup. Assume a target function $f^* : \mathbb{R}^d \to \mathbb{R}$ is $\beta$-Lipschitz, i.e. $|f^*(x) - f^*(x')| \le \beta \|x - x'\|$, and a dataset $\{(x_i, y_i)\}_{i=1,\dots,n}$ with $x_i \sim \mathcal{N}(0, I/d)$ and $y_i = f^*(x_i)$ drawn independently. We will consider the Nearest-Neighbor estimator $\hat{f}_{\mathrm{NN}}$ given by
$$\hat{f}_{\mathrm{NN}}(x) := y_{i(x)}, \qquad \text{where } i(x) = \arg\min_i \|x - x_i\| .$$
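A direct (brute-force) implementation of $\hat{f}_{\mathrm{NN}}$ could look as follows, assuming NumPy arrays; the toy target in the usage example is only for illustration:

```python
import numpy as np

def nn_predictor(X_train, y_train):
    """Return the 1-nearest-neighbor estimator for the given training set."""
    def f_hat(x):
        # i(x) = argmin_i ||x - x_i||, then predict the corresponding label y_{i(x)}
        i = np.argmin(np.linalg.norm(X_train - x, axis=1))
        return y_train[i]
    return f_hat

# Usage with a 1-Lipschitz toy target f*(x) = ||x||
rng = np.random.default_rng(0)
d, n = 50, 200
X_train = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))
f_hat = nn_predictor(X_train, np.linalg.norm(X_train, axis=1))
x = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)
print(f_hat(x), np.linalg.norm(x))
```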
5. Show that $\mathbb{E}|\hat{f}_{\mathrm{NN}}(x) - f^*(x)| \le \beta\, \mathbb{E}\min_i \|x - x_i\|$, where the expectation is taken over the test sample $x$ and the training samples $\{x_i\}$. (2 points)
6. Let $E_n$ denote the expectation of $\min_{i=1,\dots,n} Y_i$, where $Y_i \sim \mathcal{N}(0,1)$ i.i.d. Show that if now $X_i \sim \mathcal{N}(\mu, \sigma^2)$ i.i.d., then $\mathbb{E}\min_i X_i = \mu + \sigma E_n$. (1 point)
7. Using the fact that $\|x - x_i\|$ and $\|x - x_j\|$ are conditionally independent given $x$, and assuming the asymptotic Gaussianity $\|X - X'\| \to \mathcal{N}(\sqrt{2}, C/d)$ for a constant $C > 0$, show that $\mathbb{E}\min_i \|x - x_i\| \approx \sqrt{2} + \frac{\sqrt{C}}{\sqrt{d}}\, E_n$. (1 point)
8. Using the fact that $E_n \approx -\sqrt{2 \log n}$, conclude that as long as $\log n \ll d$, the generalisation bound in (5) is vacuous (in other words, unless the sample size is exponential in the dimension, the bound does not provide any useful learning guarantee). (2 points)
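The two facts used in questions 6-8 can be checked numerically with a short script (assuming NumPy; the constants are illustrative): $E_n$ behaves like $-\sqrt{2\log n}$ only for large $n$, and $\mathbb{E}\min_i \|x - x_i\|$ stays close to $\sqrt{2}$ unless $n$ is exponential in $d$:

```python
import numpy as np

rng = np.random.default_rng(3)

# E_n = E[min of n standard Gaussians] versus the -sqrt(2 log n) approximation
for n in [10, 100, 1000, 10000]:
    mins = rng.normal(size=(2000, n)).min(axis=1)
    print(f"n={n:5d}  E_n ~ {mins.mean():.3f}   -sqrt(2 log n) = {-np.sqrt(2 * np.log(n)):.3f}")

# E[min_i ||x - x_i||] barely moves away from sqrt(2) for moderate n when d is large
d, reps = 100, 200
for n in [10, 100, 1000]:
    vals = []
    for _ in range(reps):
        x = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)
        Xi = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))
        vals.append(np.linalg.norm(Xi - x, axis=1).min())
    print(f"d={d}, n={n:5d}  E[min_i ||x - x_i||] ~ {np.mean(vals):.3f}  (sqrt(2) = {np.sqrt(2):.3f})")
```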
While this argument shows that our Lipschitz-based upper bound is cursed by dimension, let us conclude this exercise by showing that the exponential dependency on dimension is necessary. For simplicity, we will replace the $d$-dimensional $\ell_2$ sphere by the $\ell_\infty$ sphere, i.e. the cube $B = [-1,1]^d$. Let $\Omega = [-1/2, 1/2]^d$ denote a smaller cube. For $x \in \Omega$, let $\varphi(x) = \mathrm{dist}(x, \partial\Omega)$. In words, $\varphi(x)$ is the distance from $x$ to the boundary of the cube $\Omega$.
9. Show that $\varphi$ is 1-Lipschitz. [Hint: Use the triangle inequality and the definition of $\varphi$ as $\varphi(x) = \min_{y \in \partial\Omega} \|x - y\|$. Also, drawing $\varphi$ in two dimensions might help!] (2 points)
We will now use this 1-Lipschitz function supported in the 'small' cube $\Omega$ to construct a hard-to-learn function $f^*$ defined on the large cube $B = [-1,1]^d$.
10. Verify that we can fit $2^d$ copies of $\Omega$ into $B$ by translating copies of $\Omega$ appropriately. [Hint: Use the fact that both $\Omega$ and $B$ are separable in the standard basis. Also, drawing this in two dimensions should help.] (2 points)
Let now $g \in \{\pm 1\}^{2^d}$ be a binary string of length $2^d$, which we index using $d$ binary variables $z_1 = \pm 1, \dots, z_d = \pm 1$. We define
$$f^*(x) = \sum_{z = (z_1, \dots, z_d) \in \{\pm 1\}^d} g(z)\, \varphi(x - z/2) . \tag{0.1}$$
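For concreteness, here is a minimal NumPy sketch of $\varphi$ and of a random target $f^*$ built as in equation (0.1); it uses the fact that, for a point inside a cube, the distance to the boundary is the smallest coordinate-wise distance to a face (all names are illustrative):

```python
import numpy as np

def phi(u):
    """phi(u) = dist(u, boundary of [-1/2, 1/2]^d) for points u inside the small cube."""
    return np.min(0.5 - np.abs(u), axis=-1)

def make_target(d, rng):
    """Random f* from eq. (0.1): one random sign g(z) per tile z in {+-1}^d."""
    g = rng.choice([-1.0, 1.0], size=2 ** d)

    def f_star(x):
        x = np.atleast_2d(x)                  # shape (m, d)
        z = np.where(x >= 0, 1.0, -1.0)       # the tile z/2 + Omega containing each point
        # map the bits (z + 1)/2 in {0,1}^d to an index in {0, ..., 2^d - 1}
        idx = ((z + 1) / 2).astype(int) @ (2 ** np.arange(d))
        return g[idx] * phi(x - z / 2)

    return f_star
```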
In words, $f^*$ is constructed by tiling $2^d$ shifted versions of the window $\varphi$, and flipping the sign of each tile with the bit $g(z)$. From the tiling in part 10, the support of $f^*$ is the 'large' cube $B$.
11. Verify that $f^*$ is 1-Lipschitz. [Hint: Given two points $x, x' \in B$, consider the line segment joining them, and let $y_k$ be its intersections with the boundaries of the tiled copies of $\Omega$, treating each resulting segment separately.] (1 point)
12. For $d = 2$, draw an instance of $f^*$ (you can use either a manual drawing or drawing software). (2 points)
13. Finally, show that if $n$, the number of training samples, satisfies $n \le 2^{d-1}$, then the generalisation error of any learning algorithm producing $\hat{f}$ will be such that
$$\frac{\mathbb{E}_{x \sim \mathrm{Unif}([-1,1]^d)} |f^*(x) - \hat{f}(x)|}{\mathbb{E}_{x \sim \mathrm{Unif}([-1,1]^d)} |f^*(x)|} \ge \frac{1}{2}\,.$$
In words, the relative generalisation error won't go to zero unless $n \gtrsim 2^d$. [Hint: Argue in terms of the tiling you have constructed in question 10; what happens if no datapoint intersects a given tile?] (2 points)
14. Choosing $d \in [5, 13]$, implement this experiment, using any predictor for $f^*$ you want (e.g. a Neural Net) and the Mean-Squared Error (MSE) loss. Verify that the required sample size $n$ before your model starts generalising grows exponentially with $d$. For each $d$, draw two large datasets $\{x_i\}_{i=1,\dots,n}$ and $\{\tilde{x}_i\}_{i=1,\dots,n}$ with $x_i, \tilde{x}_i \sim \mathrm{Unif}([-1,1]^d)$, then draw $K = 10$ different target functions $f_k^*$, $k = 1,\dots,K$, by picking random bits in equation (0.1), and fit your model to the training data $\{(x_i, y_i = f_k^*(x_i))\}_{i=1,\dots,n}$. Then estimate your relative generalisation error on the test set $\{(\tilde{x}_i, \tilde{y}_i = f_k^*(\tilde{x}_i))\}$ (MSE divided by the standard deviation of the target function on the test set), and average the performance across the $K$ runs. You can pick $n \in \{2^j;\ j = 5,\dots,16\}$. (3 points)
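One possible (deliberately simple) realisation of this experiment, assuming NumPy and scikit-learn's MLPRegressor as the predictor; the architecture, training budget and grid are illustrative, and the full sweep over $(d, n, k)$ is computationally heavy, so you may want to subsample it:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_target(d):
    """Random target f* from eq. (0.1): random signs on the 2^d tiles of [-1, 1]^d."""
    g = rng.choice([-1.0, 1.0], size=2 ** d)
    def f_star(x):
        z = np.where(x >= 0, 1.0, -1.0)
        idx = ((z + 1) / 2).astype(int) @ (2 ** np.arange(d))
        return g[idx] * np.min(0.5 - np.abs(x - z / 2), axis=-1)
    return f_star

K = 10                                    # number of random targets per (d, n)
for d in range(5, 14):
    for n in [2 ** j for j in range(5, 17)]:
        rel_errs = []
        for _ in range(K):
            f_star = make_target(d)
            X_tr = rng.uniform(-1.0, 1.0, size=(n, d))
            X_te = rng.uniform(-1.0, 1.0, size=(n, d))
            y_tr, y_te = f_star(X_tr), f_star(X_te)
            model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
            model.fit(X_tr, y_tr)
            mse = np.mean((model.predict(X_te) - y_te) ** 2)
            rel_errs.append(mse / np.std(y_te))  # MSE divided by the test std of the target
        print(f"d={d:2d}  n={n:6d}  relative generalisation error = {np.mean(rel_errs):.3f}")
```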