Learning on Low-Dimensional Image Data (1)

Posted on Sat 18 April 2020 in learning

Introduction

Many problems in machine learning are difficult to interpret because their high-dimensional state spaces elude intuition. Certain traits of these spaces, collectively referred to as the curse of dimensionality, affect the distribution of objects, metric properties, and other aspects in ways that make it difficult to apply geometric imagination and methods, let alone differential (smooth) ones, to the manifolds expressing the decision regions of problems in these spaces, classification problems in particular.

In this article we will restrict ourselves to two- or three-dimensional state spaces in order to learn from these much more accessible objects.

Decision Regions in Toy Spaces

How many classification problems can an image of two pixels create?

Let $N$ denote the number of possible pixel values of a grayscale image and $k$ its number of pixels. The totality of all these images forms a $k$-dimensional state space, which we call the Classification Toy Space of dimension $k$, or $\bm{T_k}$.
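As a concrete illustration (a minimal sketch in NumPy; the variable names are mine, not from the article), $T_2$ for $N = 256$ can be materialized as an $N \times N$ grid of 2-pixel images:

```python
import numpy as np

# All 2-pixel grayscale images with N possible pixel values form T_2:
# an N x N grid whose point (p0, p1) is the image with those pixel values.
N = 256
p0, p1 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
T2 = np.stack([p0, p1], axis=-1)

print(T2.shape)   # (256, 256, 2)
print(N**2)       # 65536 images in total
```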

Consider the following problems:

  • Binary classification $(k=2)$:

    For $N = 256$ and $k = 2$, there are $N^k = 65536$ possible $2$-pixel images. $T_2$ contains $2^{N^2}$ different subsets¹ ², and every such subset $U$ together with its complement $\overline{U}$ can be interpreted as the decision regions of a particular binary classification problem. Obviously, the two classification problems $(U,\overline{U})$ and $(\overline{U}, U)$ are equivalent. After removing the trivial cases $(T_2,\emptyset), (\emptyset,T_2)$ and identifying equivalent partitions, we are finally left with $2^{N^2-1}-1$ non-trivial binary problems. Each of them can be represented as a single $N \times N$ image with two colors marking the complementary decision regions. Note that the latter form a binary partition of $T_2$.

  • Classification with $k$ classes $(k \geq 2)$:

    The Stirling number of the second kind counts the partitions of a set of $n$ elements into $k$ non-empty sets: $$S(n,k) = \bigg\{{n\atop k}\bigg\} = \frac{1}{k!}\sum_{j=0}^k (-1)^j\binom{k}{j}(k-j)^n$$ For $k=2$ and $n=N^2$ we recover the previous case, since $S(n,2) = 2^{n-1}-1$. In general, the number of classification problems with $k$ categories in $T_k$ is $S(N^k,k)$. In the case $k=3$, $T_k$ is a cube. Both counting arguments are checked numerically in the sketch below this list.
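As a quick sanity check (a minimal sketch in plain Python; the helper names `stirling2` and `count_partitions_brute_force` are mine, not from the article), both counts can be verified by brute force on tiny sets:

```python
import math
from itertools import product

def stirling2(n: int, k: int) -> int:
    # Explicit sum formula for the Stirling number of the second kind.
    total = sum((-1)**j * math.comb(k, j) * (k - j)**n for j in range(k + 1))
    return total // math.factorial(k)

def count_partitions_brute_force(n: int, k: int) -> int:
    # Count labelings of n elements with k labels where every label occurs
    # (surjections), then divide by k! to identify relabeled partitions.
    surjections = sum(
        1 for labels in product(range(k), repeat=n) if len(set(labels)) == k
    )
    return surjections // math.factorial(k)

for n, k in [(4, 2), (5, 2), (5, 3), (6, 3)]:
    assert count_partitions_brute_force(n, k) == stirling2(n, k)

# For k = 2 the Stirling number collapses to 2^(n-1) - 1:
assert all(stirling2(n, 2) == 2**(n - 1) - 1 for n in range(2, 12))
print("all checks passed")
```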

Regression Toy Spaces

In order to consider regression problems, we add an additional dimension to every element of the classification toy spaces. This dimension contains values in $[0,1]$ that should in general be interpreted as probabilities before the decision step of a classification problem. The corresponding space is called the Regression Toy Space, or $\bm{\Theta_p}$. When the context is unambiguous, we simply use the term Toy Space.
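For instance (a sketch under my own, arbitrary choice of probability function; the article does not fix one), a regression toy space over $T_2$ can be built by attaching a value in $[0,1]$ to every 2-pixel image:

```python
import numpy as np

# Attach a probability channel to every point of T_2: each 2-pixel image
# (p0, p1) gets a value in [0, 1]. The function used here is an arbitrary
# example, not part of the article's definition.
N = 256
p0, p1 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
prob = 0.5 * (1.0 + np.sin(p0 / 20.0) * np.cos(p1 / 20.0))  # in [0, 1]

# Thresholding the probabilities turns the regression surface back into
# the decision regions of a binary classification problem on T_2.
decision = (prob >= 0.5).astype(int)
```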

Examples

[Figure: spiral-shaped decision regions of a binary classifier $(k=2)$, next to the decision regions of a classification problem with multiple categories $(k=10)$]
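Spiral regions like those in the $k=2$ example can be generated, for instance, as follows (a sketch with parameters of my own choosing; the article does not say how its figures were produced):

```python
import numpy as np

# Paint a two-armed spiral onto the N x N image representing T_2: each
# point is classified by its angle around the centre, offset by an amount
# proportional to its radius, which twists the two half-planes into arms.
N = 256
y, x = np.mgrid[0:N, 0:N]
cx = cy = (N - 1) / 2.0
r = np.hypot(x - cx, y - cy)
theta = np.arctan2(y - cy, x - cx)

turns = 3  # how often each arm winds around the centre
phase = theta + 2.0 * np.pi * turns * r / N
spiral_class = ((phase % (2.0 * np.pi)) < np.pi).astype(int)  # 0/1 regions

# Optional visualization:
# import matplotlib.pyplot as plt
# plt.imshow(spiral_class, cmap="coolwarm"); plt.show()
```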

To be continued...


  1. including the empty set 

  2. because the power set of a set of $M$ elements itself has $2^{M}$ elements