This post continues my explorations of simple intelligence. In this post, we'll consider some elementary sensing cells. We'll then look at whether we can apply local function approximators. Sensors Consider a set of sensing cells, which we'll call "sensors". We have N sensors, where each sensor measures a value over time. This value could be … Continue reading Swift Taylor Approximations
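The idea of a local function approximator for a sensor reading can be sketched with a first-order Taylor expansion: estimate the slope from recent samples and extrapolate. This is only an illustrative sketch, assuming evenly spaced samples; the function and variable names are mine, not from the post.

```python
import numpy as np

def taylor_first_order(values, t, dt=1.0):
    """Predict the next sample from the value and a backward-difference
    slope at index t: a first-order local Taylor approximation."""
    slope = (values[t] - values[t - 1]) / dt  # finite-difference derivative
    return values[t] + slope * dt

# Example: a sensor sampling a smooth signal every 0.1 time units.
ts = np.arange(0.0, 1.0, 0.1)
readings = np.sin(ts)

# Predict the reading at t = 0.6 from the samples at 0.4 and 0.5.
pred = taylor_first_order(readings, t=5, dt=0.1)
```

The prediction tracks the true value closely while the signal stays locally near-linear, which is the appeal of such cheap local approximators for streams of sensor data.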
Thinking about how we can represent a cow.
I often find myself thinking deeply about probability. What *is* it? Let’s start by assuming there is some form of local reality at a point of interest. Let’s then assume that the “true” nature of this local reality is unknowable. This may be due to the limits of our senses or the limits of time … Continue reading Probability as Humble Knowledge
I have an Oculus Quest 2. It's really fun. Experiencing things in VR is completely different from viewing them on a standard 2D screen. So how do I start building fun things to experience in VR? Most of my backend programming is performed in Python. This is the home of TensorFlow, Keras, PyTorch, scikit-learn, OpenCV etc. … Continue reading Playing Around With VR & AI
Computer vision engineers need to be less Linus Torvalds and more Erik Weisz.
Recently I got to thinking about statistics and determinism. By statistics I mean the study of sets of data, typically obtained by measurement across a population of "things". By determinism I mean processes that always have the same output for the same input. Often in engineering they are presented as … Continue reading Statistics and Determinism
I've often been struck by the inflexibility of neural networks. So much effort goes into training huge billion-parameter models, but these all have fixed-size inputs and outputs. What happens if you need to add another input or output? I was thinking about this in the context of the simple matrix algebra of neural … Continue reading Extending Neural Networks
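In matrix terms, adding an input to a linear layer means appending a column to its weight matrix. One natural move, sketched below under my own assumptions (the post's actual method is not shown here), is to zero-initialise the new column so existing outputs are untouched until the new weights are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# A trained linear layer y = W @ x with 4 outputs and 3 inputs.
W = rng.normal(size=(4, 3))
x = rng.normal(size=3)

# Extend to a 4th input: append a zero column to W and a value to x.
W_ext = np.hstack([W, np.zeros((4, 1))])
x_ext = np.append(x, rng.normal())

# The zero column means the new input contributes nothing yet,
# so the extended layer reproduces the original outputs exactly.
assert np.allclose(W @ x, W_ext @ x_ext)
```

Adding an output works the same way with a new row; the zero (or small random) initialisation is what lets the extension avoid disturbing what the network has already learned.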
Mammalian brains are horrendously complex. But there are some general patterns of organisation. If we consider a systems-level abstraction, can we learn anything about how we think?
In this post I'll sketch out some vague outlines of a better life for the new year. I'll try to do this without hypocrisy. I'll likely fail.
In this post, I go back to basics with reinforcement learning and consider the stupidest form of intelligence.