Title: Comment on a Medium Article
My Perspective: Back in the 1980s I took a couple of classes in neural networks. It was, at the time, the logical continuation of my interest in AI: we had little machine power, but we had C and Pascal (not to mention FORTRAN and Assembler), and that was going to be enough. The first course was a three-month jaunt into building a neural network that could recognize any letter in the alphabet by either hearing the sound or looking at a printout/drawing of it. This was 1986–87. It was fun, but it also introduced me to the complexity of a neural network and how we actually have to teach computers.
I found the above article while going through rabbit holes on AI a few months ago, and I really liked it because it does a great job of simplifying that first foray into neural networks (which we use, in different forms, for machine learning and for "deep learning," a marketing name for machine learning). It covers what became, for me, the next long search: to understand cognition and learning (I still don't get the most advanced concepts, but it's a fascinating hobby and science).
I am hoping that by reading it and mixing it with my previous link on ML, you begin to see both the potential and the complexity behind ML (and realize that for most vendors today, it's just another marketing term).
disclaimers: this sh— stuff is hard AF, but the easy part is understanding what cognition is and how it works. when i say most vendors are hyping and marketing, i don't mean any particular vendor (and of course, none of my clients); it's a generalized statement considering the complexity and, still, academic nature of true ML. that first course i took? we barely made it past helping the machine learn both what a B was and how to recognize it. today you can get that from an open source library, at worst. that's how life moves forward…
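For the curious: the "teach the machine what a B is" exercise can be sketched in a few lines today. This is not the network from that course — just a hypothetical, minimal illustration using a single perceptron and crude 5x5 bitmaps I made up, showing the same learn-by-correction idea we slogged through back then.

```python
# Crude 5x5 bitmaps (1 = ink, 0 = blank) for a few letters.
# These patterns are invented for illustration only.
LETTERS = {
    "B": [1,1,1,1,0,
          1,0,0,0,1,
          1,1,1,1,0,
          1,0,0,0,1,
          1,1,1,1,0],
    "L": [1,0,0,0,0,
          1,0,0,0,0,
          1,0,0,0,0,
          1,0,0,0,0,
          1,1,1,1,1],
    "T": [1,1,1,1,1,
          0,0,1,0,0,
          0,0,1,0,0,
          0,0,1,0,0,
          0,0,1,0,0],
}

def train_perceptron(epochs=20, lr=1.0):
    """Learn weights that answer one question: 'is this bitmap a B?'
    Classic perceptron rule: nudge weights toward the input on a miss."""
    w = [0.0] * 25
    b = 0.0
    for _ in range(epochs):
        for letter, pixels in LETTERS.items():
            target = 1 if letter == "B" else 0
            activation = sum(wi * xi for wi, xi in zip(w, pixels)) + b
            prediction = 1 if activation > 0 else 0
            error = target - prediction
            if error:
                w = [wi + lr * error * xi for wi, xi in zip(w, pixels)]
                b += lr * error
    return w, b

def is_b(pixels, w, b):
    """Apply the learned weights to a new bitmap."""
    return sum(wi * xi for wi, xi in zip(w, pixels)) + b > 0

w, b = train_perceptron()
print(is_b(LETTERS["B"], w, b))  # True
print(is_b(LETTERS["L"], w, b))  # False
```

A single perceptron only handles linearly separable cases like this toy set; recognizing all 26 letters under noise, rotation, and handwriting variation is exactly where the real complexity (and the need for multi-layer networks) comes in.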