The idea behind autoencoders is pretty straightforward: predict what you input.
What is the point, then? Well, we know that neural networks (NNs) are essentially a sequence of matrix multiplications (ignoring the nonlinear activations for a moment). Say the input matrix has shape (n, k), meaning there are n instances with k features each. We want to predict a single output for each of the n instances, i.e. an (n, 1) matrix. So we can simply multiply the (n, k) input by a (k, 1) weight matrix to get an (n, 1) result. That result is then compared with the (n, 1) labels, and the error is used to optimize the (k, 1) weight matrix. But are we really limited to a single (k, 1) matrix? Not at all! We can chain much longer sequences, for example (see the sketch below):
- Input: (n, k) x (k, 100) x (100, 50) x (50, 20) x (20, 1) ==> Output: (n, 1)
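To make that shape bookkeeping concrete, here is a minimal NumPy sketch (my own illustration, not code from the post) of the chain above; the layer widths 100, 50, and 20 are just the hypothetical values from the example:

```python
import numpy as np

n, k = 32, 10                    # e.g. 32 instances, 10 features each
X = np.random.randn(n, k)        # input matrix, shape (n, k)

# Hypothetical layer widths matching the example: k -> 100 -> 50 -> 20 -> 1
widths = [k, 100, 50, 20, 1]
weights = [np.random.randn(a, b) for a, b in zip(widths, widths[1:])]

out = X
for W in weights:
    out = out @ W                # (n, a) x (a, b) -> (n, b)

print(out.shape)                 # (n, 1): one prediction per instance
```

Each multiplication just has to agree with its neighbor on the shared dimension; the outer dimensions n and 1 are what survive at the ends of the chain.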