This post surveys some key ideas for artificial intelligence systems. It acts as a guide to the landmarks on our path to improved computing.
Intelligence, Representations & Generative Functions
When you work for a long time with artificial neural networks, you begin to see patterns emerge. One of these touches on fundamental aspects of intelligence. As I haven't seen it described very often, I'll set it out here.
The Neocortex & Power Iteration
A look at how the power iteration method for computing eigenvectors may allow brains to extract variance from the world.
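The link between power iteration and variance can be sketched in a few lines: repeatedly multiplying a vector by a matrix and renormalising converges on that matrix's dominant eigenvector, and for a covariance matrix the dominant eigenvector is the direction of greatest variance in the data. This is a generic illustration of the method, not code from the post.

```python
import numpy as np

def power_iteration(A, iters=100):
    # Start from a random vector and repeatedly apply A,
    # renormalising at each step; this converges to the
    # eigenvector with the largest-magnitude eigenvalue.
    v = np.random.default_rng(0).normal(size=A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

# Toy data: variance along the first axis dominates.
X = np.random.default_rng(1).normal(size=(500, 3)) * np.array([5.0, 1.0, 0.5])
cov = np.cov(X, rowvar=False)
v = power_iteration(cov)
# v now points (up to sign) along the axis of greatest variance.
```

Run on the toy data above, `v` lines up with the first axis, where the spread is widest.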
Invariance, Scale and Prediction
Thinking about how we can represent a cow.
Why Computer Vision Is Wrong
Computer vision engineers need to be less Linus Torvalds and more Erik Weisz.
Blueprint for Your Brain
Mammalian brains are horrendously complex. But there are some general patterns of organisation. If we consider a systems-level abstraction, can we learn anything about how we think?
State, Camera, Action!
In this post, I go back to basics with reinforcement learning and consider the stupidest form of intelligence.
This post examines composition. It starts with Lego, moves on to a theory of why deep neural networks work and how they could be trained, and ends with how brains may embody these theories.
Sounds & Silence
What can we learn about the brain from the common patterns in human-generated audio?
Locality & Hierarchy
Locality. It constrains us everywhere. But why are we unable to see it?