When you work for a long time with artificial neural networks, you begin to see patterns emerge. One of these touches on fundamental aspects of intelligence. As I haven't seen it described very often, I'll set it out here.
I've loved spaCy for a long time but I've only just got my head around how you can structure a text processing pipeline to take full advantage of its power.
A look at how the power iteration method for computing eigenvectors may allow brains to extract variance from the world.
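The post linked above covers the details; as a rough illustration of the idea, here is a minimal power iteration sketch in NumPy. The example matrix and iteration count are my own assumptions, not taken from the post — applied to a covariance-like symmetric matrix, the method converges on the direction of greatest variance.

```python
import numpy as np

def power_iteration(A, iters=100):
    """Approximate the dominant eigenvector of A by repeated multiplication."""
    v = np.random.default_rng(0).normal(size=A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)  # normalise so the vector doesn't blow up
    return v

# A toy symmetric (covariance-like) matrix: its dominant eigenvector
# points along the direction of maximum variance, here (1, 1)/sqrt(2).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = power_iteration(A)
```

Note that all the "work" is just repeated multiply-and-normalise — the kind of local, iterative operation one could plausibly imagine a biological circuit performing.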
This post continues my explorations of simple intelligence. Here, we'll consider some elementary sensing cells, then look at whether we can apply local function approximators. Consider a set of sensing cells, which we'll call "sensors". We have N sensors, where each sensor measures a value over time. This value could be … Continue reading Swift Taylor Approximations
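The full post works in Swift; as a rough sketch of the underlying idea, here is a first-order Taylor approximation of a sensor trace in Python. The sensor signal (a sine wave) and the finite-difference slope estimate are my own illustrative assumptions.

```python
import numpy as np

def taylor_first_order(samples, dt, t0_idx):
    """Local linear approximator around sample t0_idx:
    s(t0 + h) ≈ s(t0) + s'(t0) * h, with s'(t0) estimated by
    a central finite difference on the discrete samples."""
    s0 = samples[t0_idx]
    slope = (samples[t0_idx + 1] - samples[t0_idx - 1]) / (2 * dt)
    return lambda h: s0 + slope * h

# Stand-in for a sensor measuring a value over time.
t = np.linspace(0.0, 1.0, 101)
samples = np.sin(2 * np.pi * t)

approx = taylor_first_order(samples, dt=0.01, t0_idx=50)
```

The appeal for simple sensing cells is that each approximator only needs its immediate neighbours' readings — everything is local.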
Thinking about how we can represent a cow.
I often find myself thinking deeply about probability. What *is* it? Let’s start by assuming there is some form of local reality at a point of interest. Let’s then assume that the “true” nature of this local reality is unknowable. This may be due to the limits of our senses or the limits of time … Continue reading Probability as Humble Knowledge
I have an Oculus Quest 2. It's really fun. Experiencing things in VR is completely different to viewing them on a standard 2D screen. So how do I start building fun things to experience in VR? Most of my backend programming is performed in Python. This is the home of TensorFlow, Keras, PyTorch, SciKit Learn, OpenCV etc. … Continue reading Playing Around With VR & AI
Computer vision engineers need to be less Linus Torvalds and more Erik Weisz.
Recently I got to thinking about statistics and determinism. By statistics I mean the study of sets of data, typically obtained by measurement across a sample or population of "things". By determinism I mean processes that always produce the same output for the same input. Often in engineering they are presented as … Continue reading Statistics and Determinism
I've often been struck by the inflexibility of neural networks. So much effort goes into training huge billion-parameter models, but these all have fixed-size inputs and outputs. What happens if you need to add another input or output? I was thinking about this in the context of the simple matrix algebra of neural … Continue reading Extending Neural Networks
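The linked post has the details; as a minimal sketch of the matrix-algebra view (my own toy example, not the post's), adding an input to a linear layer amounts to appending a column of weights. Initialising that column to zero leaves the layer's existing behaviour untouched until the new weights are trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear layer: y = W @ x, with 3 inputs and 2 outputs.
W = rng.normal(size=(2, 3))
x = rng.normal(size=3)
y_old = W @ x

# Extend to a 4th input by appending a zero column to W.
# The zero weights mean the new input has no effect yet, so the
# layer's outputs are unchanged — a safe starting point for training.
W_ext = np.hstack([W, np.zeros((2, 1))])
x_ext = np.append(x, rng.normal())  # value of the new input
y_new = W_ext @ x_ext
```

Adding an output works the same way with an appended row; the asymmetry is that a new zero row produces a new output of zero, which training must then shape.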