Neural networks #
Manifold assumption #
Neural networks can overcome the “curse of dimensionality” in part by exploiting the manifold assumption: high-dimensional data tends to lie on or near a manifold of significantly lower intrinsic dimension, and a network can learn an embedding adapted to that lower-dimensional structure.
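A minimal sketch of the idea (using PCA via the SVD rather than a neural network, purely for illustration): data generated along a one-dimensional direction embedded in a 100-dimensional ambient space can be described with far fewer coordinates than the ambient dimension suggests.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=500)       # intrinsic 1-D coordinate
direction = rng.normal(size=100)      # fixed embedding direction in R^100
X = np.outer(t, direction)            # 500 points in R^100, all on one line

# The singular values of the centred data reveal the effective dimension:
# only one is (numerically) non-zero, despite the 100 ambient dimensions.
singular_values = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
effective_dim = int(np.sum(singular_values > 1e-8))
print(effective_dim)  # 1
```

Real data manifolds are curved rather than linear, which is why nonlinear models such as neural networks are needed to discover them; the linear case above just makes the dimension gap easy to see.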
Links and resources #
- This very nice guide to convolutional neural networks, including plenty of intuition about what convolution is, along with image-processing examples.
- This post on my personal website about a universal approximation theorem for single-layer neural networks.
- A nice post by Justin Meiners about when optimisers can outperform neural networks.
- A nice notebook with lots of interesting information about neural networks and transformers (which power large language models).
- An animated explanation of how convolution works in neural networks.
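Since several of the links above concern convolution, here is a minimal sketch of the operation as used in CNN layers (strictly speaking, deep-learning libraries compute cross-correlation, i.e. the kernel is not flipped): a small kernel slides over the image and, at each position, the element-wise product with the underlying patch is summed. The `conv2d` helper below is a hypothetical name, not a library function.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the operation in a CNN layer."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Sum of the element-wise product of kernel and image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge-detecting kernel responds strongly where the image
# changes from dark (0) to bright (1) along the horizontal axis.
image = np.array([[0, 0, 1, 1]] * 4, dtype=float)
kernel = np.array([[-1, 0, 1]] * 3, dtype=float)
print(conv2d(image, kernel))  # [[3. 3.] [3. 3.]]
```

Every window here straddles the dark-to-bright edge, so all four output values are large; on a uniform region the same kernel would return zeros.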