
COMS 4771 SP21 HW2 Due: Mon Feb 22, 2021 at 11:59pm
This homework is to be done alone. No late homeworks are allowed. To receive credit, a typeset copy of the homework PDF must be uploaded to Gradescope by the due date. You must show your work to receive full credit. Discussing possible solutions for homework questions is encouraged on Piazza and with your peers, but you must write your own individual solutions and not share your written work/code. You must cite all resources (including online material, books, articles, help taken from specific individuals, etc.) that you used to complete your work.
1 Cost-sensitive classification
Suppose you have a binary classification problem with input space X = R and output space Y = {0, 1}, where it is c times as bad to commit a “false positive” as it is to commit a “false negative” (for some real number c ≥ 1). To make this concrete, let’s say that if your classifier predicts 1 but the correct label is 0, you incur a penalty of $c; if your classifier predicts 0 but the correct label is 1, you incur a penalty of $1. (And you incur no penalty if your classifier predicts the correct label.)
Assume the distribution you care about has a class prior with π0 = 2/3 and π1 = 1/3, and the class conditionals are Gaussian, with densities N(0, 1) for class 0 and N(2, 1/4) for class 1. Let f∗ : R → {0, 1} be the classifier with the smallest expected penalty.
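For reference, one standard way to write the quantity that f∗ should minimize (this is just a restatement of the cost structure above, not the required answer): the expected penalty of a classifier f is

E[penalty of f] = c · π0 · P(f(X) = 1 | Y = 0) + π1 · P(f(X) = 0 | Y = 1),

where X is drawn from the mixture described above.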
(i) Assume 1 ≤ c ≤ 14. Specify precisely the subset of R in which the classifier f∗ predicts 1. (E.g., [0, 5c] ∪ [6c, +∞).)
(ii) Now instead assume c ≥ 15. Again, specify precisely the region in which the classifier f∗ predicts 1.
2 Making data linearly separable by feature space mapping
Consider the infinite-dimensional feature space mapping

Φσ : R → R^∞,   x ↦ ( max{ 0, 1 − |(α − x)/σ| } )_{α ∈ R}.

(It may be helpful to sketch the function f(α) := max{0, 1 − |α|} to understand the mapping and answer the questions below.)
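As an aid for the suggested sketch, the following minimal Python snippet (assuming numpy and matplotlib are available; the values of x and σ are arbitrary) plots the coordinate function α ↦ max{0, 1 − |(α − x)/σ|}:

import numpy as np
import matplotlib.pyplot as plt

# Plot the "tent" feature alpha -> max{0, 1 - |(alpha - x)/sigma|},
# centered at x with half-width sigma.
x, sigma = 1.0, 0.5
alpha = np.linspace(-1, 3, 500)
tent = np.maximum(0.0, 1.0 - np.abs((alpha - x) / sigma))

plt.plot(alpha, tent)
plt.xlabel('alpha')
plt.ylabel('value of the feature indexed by alpha')
plt.show()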
(i) Show that for any n distinct points x1, . . . , xn, there exists σ > 0 such that the mapping Φσ can linearly separate any binary labeling of the n points.

(ii) Show that one can efficiently compute the dot products in this feature space, by giving an analytical formula for Φσ(x) · Φσ(x′) for arbitrary points x and x′.
3 Learning DNFs with kernel perceptron
Suppose that we have S = {(x^(i), y^(i))}_{i=1}^n with x^(i) ∈ {0,1}^d and y^(i) ∈ {−1, 1}. Let φ : {0,1}^d → {0,1} be a “target function” which “labels” the points. Additionally assume that φ is a DNF formula (i.e., φ is a disjunction of conjunctions, or a boolean “or” of a bunch of boolean “and”s). The fact that it “labels” the points simply means that 1[y^(i) = 1] = φ(x^(i)).
For example, let φ(x) = (x1 ∧ x2) ∨ (x1 ∧ x̄2 ∧ x3) (where xi denotes the ith entry of x), x^(i) = [1 0 1]^T, and x^(j) = [1 0 0]^T. Then we would have φ(x^(i)) = 1 and φ(x^(j)) = 0, and thus y^(i) = 1 and y^(j) = −1.
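As a quick sanity check of this example, the target function can be written as a small Python function (a sketch for illustration only; x[0] plays the role of x1, so the indexing is 0-based):

# Example DNF: phi(x) = (x1 AND x2) OR (x1 AND NOT x2 AND x3).
def phi(x):
    return (x[0] and x[1]) or (x[0] and (not x[1]) and x[2])

x_i = [1, 0, 1]        # x^(i) from the example
x_j = [1, 0, 0]        # x^(j) from the example
print(bool(phi(x_i)))  # True,  so y^(i) = +1
print(bool(phi(x_j)))  # False, so y^(j) = -1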
(i) Give an example target function φ (make sure it is a DNF formula) and a set S such that the data is not linearly separable.
Part (i) clearly shows that running the perceptron algorithm on S cannot work in general since the data does not need to be linearly separable. However, we can try to use a feature transformation and the kernel trick to linearize the data and thus run the kernelized version of the perceptron algorithm on these datasets.
Consider the feature transformation φ : {0,1}^d → {0,1}^{3^d} which maps a vector x to the vector of all conjunctions of its entries or of their negations. So for example, if d = 2 then φ(x) = [1  x1  x2  x̄1  x̄2  x1∧x2  x1∧x̄2  x̄1∧x2  x̄1∧x̄2]^T (note that 1 can be viewed as the empty conjunction, i.e. the conjunction of zero literals).
Let K : {0,1}^d × {0,1}^d → R be the kernel function associated with φ (i.e. for a, b ∈ {0,1}^d : K(a, b) = φ(a) · φ(b)). Note that the naive approach of calculating K(a, b) (simply calculating φ(a) and φ(b) and taking the dot product) takes time Θ(3^d).
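To make the naive approach concrete, here is a sketch that enumerates all 3^d conjunctions directly (each variable appears as a positive literal, a negated literal, or not at all); it is only feasible for very small d, and it is not the O(d) method asked for in part (ii):

from itertools import product

def phi_features(x):
    # Build the full 3^d-dimensional feature vector of x: one entry per
    # conjunction, equal to 1 iff x satisfies that conjunction. The all-'absent'
    # case is the empty conjunction, i.e. the constant-1 feature.
    feats = []
    for conj in product(('pos', 'neg', 'absent'), repeat=len(x)):
        satisfied = all(role == 'absent'
                        or (role == 'pos' and xi == 1)
                        or (role == 'neg' and xi == 0)
                        for role, xi in zip(conj, x))
        feats.append(1 if satisfied else 0)
    return feats

def K_naive(a, b):
    # Theta(3^d) time: materialize both feature vectors and take the dot product.
    return sum(fa * fb for fa, fb in zip(phi_features(a), phi_features(b)))

print(K_naive([1, 0], [1, 1]))  # counts the conjunctions satisfied by both points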
Also let w∗ ∈ R^{3^d} be such that w∗_1 = −0.5 (this is the entry which corresponds to the empty conjunction, i.e. ∀x ∈ {0,1}^d : φ(x)_1 = 1) and, for all i > 1, w∗_i = 1 iff the ith conjunction is one of the conjunctions of φ. So for example, in the above case where d = 2 and φ(x) = [1  x1  x2  x̄1  x̄2  x1∧x2  x1∧x̄2  x̄1∧x2  x̄1∧x̄2]^T, and letting φ(x) = (x1 ∧ x2) ∨ (x̄1), we would have:

w∗ = [−0.5  0  0  1  0  1  0  0  0]^T
(ii) Find a way to compute K(a, b) in O(d) time.
(iii) Show that w∗ linearly separates φ(S) (φ(S) is just a shorthand for {(φ(x^(i)), y^(i))}_{i=1}^n) and find a lower bound for the margin γ with which it separates the data. Remember that

γ = min_{(φ(x^(i)), y^(i)) ∈ φ(S)}  y^(i) · ( w∗ · φ(x^(i)) ) / ‖w∗‖.

Your lower bound should depend on s, the number of conjunctions in φ.
(iv) Find an upper bound on the radius R of the dataset φ(S). Remember that

R = max_{(φ(x^(i)), y^(i)) ∈ φ(S)}  ‖φ(x^(i))‖.

(v) Use parts (ii), (iii), and (iv) to show that we can run kernel perceptron efficiently on this transformed space, in which our data is linearly separable (show that each iteration takes only O(nd) time), but that unfortunately the mistake bound is very bad (show that it is O(s·2^d)).
There are ways to get a better mistake bound in this same kernel space, but the running time then becomes very bad (exponential). It is open whether there are ways to get both a polynomial mistake bound and polynomial running time.
4 Understanding model complexity and overfitting
Here we will empirically study the tradeoff between model complexity and generalizability using a handwritten digits dataset.
Download the datafile digits.mat. This datafile contains 10,000 images (each of size 28×28 pixels = 784 dimensions) of handwritten digits along with the associated labels. Each handwritten digit belongs to one of the 10 possible categories {0, 1, . . . , 9}. There are two variables in this datafile: (i) Variable X is a 10,000×784 data matrix, where each row is a sample image of a handwritten digit. (ii) Variable Y is the 10,000×1 label vector where the ith entry indicates the label of the ith sample image in X.
Special note for those who are not using Matlab: Python users can use scipy to read in the mat file, R users can use R.matlab package to read in the mat file, Julia users can use JuliaIO/MAT.jl. Octave users should be able to load the file directly.
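For example, in Python the file can be read roughly as follows (a sketch assuming scipy is installed; the variable names X and Y are those described above):

from scipy.io import loadmat

data = loadmat('digits.mat')
X = data['X']            # 10,000 x 784 array, one image per row
Y = data['Y'].ravel()    # length-10,000 vector of labels
print(X.shape, Y.shape)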
To visualize this data (in Matlab): say you want to see the actual handwritten character image of the 77th data sample. You may run the following code (after the data has been loaded):
figure;
imagesc(1-reshape(X(77,:),[28 28])’);
colormap gray;
To see the associated label value:
Y(77)
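A rough Python/matplotlib equivalent of the Matlab snippet above (a sketch; it assumes X and Y were loaded as in the earlier snippet and uses 0-based indexing, so the 77th sample is row 76):

import matplotlib.pyplot as plt

# NumPy's row-major reshape corresponds to Matlab's column-major reshape
# followed by the transpose, so no explicit transpose is needed here.
img = 1 - X[76].reshape(28, 28)
plt.imshow(img, cmap='gray')
plt.title('label = %d' % Y[76])
plt.show()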
(i) Build a decision tree classifier for the handwritten digit dataset. In building your decision tree, you may use any reasonable uncertainty measure to determine the feature and threshold to split at in each cell. Make sure the depth of the tree is adjustable with hyperparameter K.
You must submit your code to receive full credit.
(ii) Ensure that there is a random split between training and test data. Plot the training error and test error as a function of K.
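One possible way to set up this experiment is sketched below, with scikit-learn's DecisionTreeClassifier standing in for the tree you build in part (i) (the split ratio, depth range, and random seed are illustrative; X and Y are assumed loaded as above):

import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Random 80/20 train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

depths = range(1, 21)
train_err, test_err = [], []
for K in depths:
    clf = DecisionTreeClassifier(max_depth=K).fit(X_tr, y_tr)
    train_err.append(1 - clf.score(X_tr, y_tr))
    test_err.append(1 - clf.score(X_te, y_te))

plt.plot(depths, train_err, label='training error')
plt.plot(depths, test_err, label='test error')
plt.xlabel('tree depth K')
plt.ylabel('error')
plt.legend()
plt.show()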
(iii) Do the trends change for different random splits of training and test data?
(iv) How do you explain the difference in the behavior of training and testing error as a function of K?
(v) Based on your analysis, what is a good setting of K if you were to deploy your decision tree classifier to classify handwritten digits?
