This post looks at some key ideas for artificial intelligence systems. It acts as a guide to the landmarks on our path to improved computing.
When you work for a long time with artificial neural networks, you begin to see patterns emerge. One of these touches on fundamental aspects of intelligence. As I haven't seen it described very often, I'll set it out here.
A look at how the power iteration method for computing eigenvectors may allow brains to extract variance from the world.
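The method this post refers to is easy to sketch: repeatedly applying a matrix to a vector and renormalising converges to the eigenvector with the largest-magnitude eigenvalue, which for a covariance matrix is the direction of greatest variance. The function name and toy matrix below are illustrative, not taken from the post:

```python
import numpy as np

def power_iteration(A, num_steps=100):
    """Return an approximation to A's dominant eigenvector."""
    v = np.random.default_rng(0).normal(size=A.shape[0])
    for _ in range(num_steps):
        v = A @ v                  # stretch v along A's eigendirections
        v = v / np.linalg.norm(v)  # renormalise so only the direction survives
    return v

# Toy covariance-like matrix; most variance lies near the first axis.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
v = power_iteration(A)
```

The appeal for a biological story is that each step needs only a matrix–vector product and a normalisation, both plausible as local neural operations.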
Thinking about how we can represent a cow.
Computer vision engineers need to be less Linus Torvalds and more Erik Weisz.
Mammalian brains are horrendously complex. But there are some general patterns of organisation. If we consider a systems-level abstraction, can we learn anything about how we think?
In this post, I go back to basics with reinforcement learning and consider the stupidest form of intelligence.
This post looks at composition. It starts with Lego. It then looks at a theory of why deep neural networks work and how they could be trained. It ends on how brains may embody these theories.
What can we learn about the brain from the common patterns in human-generated audio?
Locality. It constrains everything, everywhere. But why are we unable to see it?