
Intro to AI, Neural nets, measuring intelligence and "natural intelligence" AI - The Dog Walk 015

Proofread subtitles, along with cards and timestamps/chapters, will come soon! This is my first attempt at laying the foundation to explain how computers can be made to learn in the same way biological brains do: discussing intelligence as a function of complexity (connections x speed), how human brains learn new things, and how that can be replicated by artificial systems. As always, any views/opinions stated are solely my own.

The Dog Walk

1 month ago

good evening everyone and welcome to another edition of The Dog Walk! this is actually episode 15. I forget what I said yesterday... technically I wasn't wrong, but unfortunately one of the recordings I had made turned out, when I checked it after uploading it to YouTube yesterday, to be unusable... so let's just say that this is now episode 15 and yesterday's very quick little chat will be episode 14. anyway, The Dog Walk is basically my occasion to offload, hopefully daily, my thoughts on what's been going on during the day while I am out walking Mystra and Daisy, the two pups you see in front of you. hopefully some of this is thought-provoking or interesting to you. I rarely talk about the same topics twice; I go from gardening to AI and everything in between, and I hope you'll enjoy the ride. so speaking of AI, I think that's actually going to be the topic for today, because I was watchin
g a video from Asmongold again... yes, I discovered him only a few months ago, but I have to admit I have been binging a lot of his content. I find he actually has some pretty good takes on several topics; I agree with a lot of what he says, even if I don't on a few others. one of the things he was talking about today was the big controversy currently going on around the game Palworld, which is being accused of all sorts of nasty things -- the allegations are that it's basically just a ripoff of Pokemon made through the use of generative AI. as I'm sure you know, AI is a big topic right now, especially when it comes to artists; there was that big Screen Actors Guild and Writers Guild strike a few months ago. I thought I would talk a little bit about this topic because some of the comments and opinions voiced by his viewers were so completely wrong that they clearly showed people don't understand what AI is or how it works. which I guess makes sense... being an engineer, I don't really understand all the different artists and art movements, or how to correctly tell one from another, just because I haven't looked into it... but at least I don't go around voicing my opinions about those movements and art styles! bu
t that's beside the point... so I thought I would clear the air a little bit, because some of the things Zach said were also a little bit, well... not quite correct. he was close, but not quite there on a few things, and I want to talk about this because I actually have some background in it. as I've mentioned in the past, I have a degree in this stuff: I did a double major in computer science and space science, and in my fourth year of university -- and we are talking almost 20 years ago now... yeah, 2024... actually 19 years ago now -- I was lucky enough to study under one of the pioneers in the field, who was doing a lot of cutting-edge research at the time for the military and the government on neural networks. so I feel I have some -- we're going this way -- some credibility, or maybe not credibility, but at least some of the prerequisite knowledge that most of the people commenting on this topic seem to lack. so yeah, that was a little bit about me. I obviously am not fully up to speed with all the latest and greatest capabilities of AI, but I know enough about the basics that I feel I can shed a little bit of light on the topic. the first thing I want to do when talking about AI is try to debunk a few myths or misconceptions. starting at the basics: AI is a very generic term that can mean a lot of different things. come on gi
rls, let's keep going... we haven't been down this path for a little while, so the girls are probably going to be extra slow and stopping today because they have a lot of messages to catch up on, sorry about that. anyway, starting with the basics: AI is not what most people think it is. this might be changing over the last year because of ChatGPT and the other tools now reaching the zeitgeist, so perceptions may be starting to shift, but I still don't think people truly grasp the difference between a neural network, a natural intelligence system, and an expert-programmed AI. what I mean by that is that most people, when they think of AI at the basic level, think of a system where some human came in as an expert and programmed all the possible scenarios and all the possible responses into the AI: when this happens, you do this... when the client in the chat window says he has an issue with
products XYZ, then you pull up these questions related to products XYZ; if the client types the word "agent", you automatically redirect them to an agent, and so on. at the most basic level, think of automatic telephone menus: to talk about your checking account press one, to talk about credit cards press two, to talk about mortgages press three, etc. that's what is called an expert system or an expert AI, and that was -- oh, there's something there, pretty sure that's a rabbit -- Daisy! yeah, see? I can see the rabbit right there, I don't know if you can? okay, well... hello rabbit, how's it going buddy? cool! anyway... so yeah, that was the first form of "artificial intelligence" (it's been around for about 50 years), and that's what people still kind of think it is. even with generative AI being much more advanced than that, I
feel that most people, in their mind, imagine there's a human -- the Wizard of Oz behind the curtain -- pulling the strings, telling the AI how to react to prompts and basically training up the system's responses. that's not really true, or at least that's not the final form of AI. the final form of AI -- and this is something that, as I said, I was studying almost 20 years ago -- is actually neural networks. and the best person -- I'll mention this right away before I forget, because it's important -- the best person you could possibly find, in my opinion, as a source of information on this topic, or to get the layman to understand it, is Ray Kurzweil. you might not know who he is: Ray Kurzweil is a very famous "futurist", although I don't really like that term... for years now he has been a director of engineering at Google. so if I mention just that out of his biography, excluding everything else, hopefully that at least piques your interest: "hey, that Kurzweil guy must know a little bit about what he's talking about if he made it all the way to a senior engineering title at Google, right?" he's written several books, including one of my all-time favorites, "How to Create a Mind", where he explores technology, AI progress, etc., and on
e of his main points -- which is a little off topic for AI, but I'll mention it since I'm talking about him -- is that Moore's law is not a new thing. I can't really do justice in two minutes to the entire argument he elaborates in a 200-plus-page book, but he shows that the principle behind Moore's law -- that the density of transistors on a chip doubles every 18 months or so, I think it was -- is just the newest iteration of the overall progress of human technology, as far back as we can go. he literally goes back almost to the invention of fire and shows that throughout human history, technological progress has kept advancing at the same accelerating pace -- and there's another rabbit right over there, so Daisy and Mystra are probably going to go for it... yeah, so basically, since the dawn of mankind this has been the case, and we are now realizing -- my god, that rabbit really wasn't scared -- sorry for the shaky cam, as I mentioned Mystra is pretty strong -- so yeah, Moore's law is just the computer version of it, where he (Moore) noticed that transistors were doubling in density, which means twice as many transistors, which means twice as much (computing) capacity, right? that's just the latest iteration, but if you go back all the way to the first inventions it has always been true, and his main point goes back to e
xponential growth, another topic I've spoken about in one of my previous videos. maybe if I become a good creator I'll put a little card or something to link you to that one... his point is that because exponential growth is, well, exponential, people always underestimate how fast things will change and how fast things progress -- it just keeps getting faster. anyone who's been around for a decade or two will see this in their daily lives: how fast is technology progressing now compared to when you were born? because of the exponential growth factor, you are still projecting linear growth in your mind when, like Asmongold was saying, "oh, in 200 years there'll be computers that have brains like humans" -- sure, I agree with the second part of your sentence, but when you factor in exponential growth, think more like 10, 20, max 30 years (not 200), and that's just based on what we know now, not what's unknown to the public, right? the bigger computers -- whether it's the CIA's or a research university's -- we don't know how smart they are! but the point is that, in my mind, it is very likely that there currently are computer systems out there with the same processing power as a human brain. now, does that mean they are as smart as a human? right now, no, because all those systems are still limited by the dom
ains which they can access. we can operate our bodies and we're not limited to one task -- we're not just doing image analysis as the one thing we do, or optimizing production chains, or whatever it is, right? we're more... I can't find that word in English... polyvalent -- anyway, we're more versatile, there you go! but that's just the situation now. once we start to integrate multiple AIs from different expert fields -- when you integrate the vision AI that Tesla is developing with the optimization AI that Amazon is producing, and combine all that with the neural nets that Boston Dynamics is creating to control their robots -- pretty soon we'll be obsolete. now, you might not believe that, so let me explain a little about what I mean by processing power. it's very simple... actually no, I'm not going to explain how our brain works right now; I'll start
with processing power. processing power is measured by the quantity of operations that can be completed within a certain period of time. how that is calculated is very easy: let's say -- I said 200 earlier, I don't know why, let's pick round numbers -- let's say you have 100 transistors and each transistor can do 10 operations per second. that means your system can do 10 * 100, so 1,000 operations every second. okay, but how do you scale that? you can scale it by getting more transistors into your system: with 200 transistors each doing 10 operations per second, you can now do 2,000 operations per second, so your system is twice as capable as the other one. that's one way you can do it... the other way is to take those 100 transistors and speed them up so that each can now do 20 operations per second. do the math and it comes out a wash: both systems can now do 2,000 operations per second, both are twice as capable as the previous one, and those are two perfectly valid ways of increasing capability, right? human brains currently work the first way: we have a lot -- a lot -- of connectors, our neurons, but since our neurons work at chemical speed, they are very slow.
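the arithmetic above can be sketched in a few lines of Python -- a toy illustration of the walk's round numbers, not a real benchmark:

```python
# toy model: total capacity = number of units * operations per second per unit
def capacity(units: int, ops_per_second: int) -> int:
    return units * ops_per_second

baseline = capacity(100, 10)      # 100 transistors at 10 ops/s each
more_units = capacity(200, 10)    # scale by adding transistors
faster_units = capacity(100, 20)  # scale by speeding each one up

print(baseline, more_units, faster_units)  # 1000 2000 2000
assert more_units == faster_units == 2 * baseline  # both routes come out a wash
```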
computers, by contrast, are going the other way: the neural networks we build have far fewer "neurons" -- we don't have processors right now with anywhere near as many transistors as there are neurons in the human brain, not even close -- but those transistors operate orders of magnitude faster than our chemical brain, at the speed of electricity rather than the speed of chemistry. so that's how -- I realize I'm missing a whole bunch of context, I'm all over the place -- but that's the first part of the equation for how it's possible to make a computer as smart, as capable, as a human, or how to increase the capability of your system: you can go the nature route, which is billions upon billions of transistors that connect to each other slowly -- the way humans (and all other biological beings) get around -- or the other way, which is fewer transistors or connectors that talk to each other much, much faster. in the end, as with the first example I gave, it all comes out a wash. that was weird -- sorry, something fell on me; things are melting right now, it's close to the freezing point... so anyway, now I'm sure you're saying, "okay, but what does that have to do with humans, or how does that make a computer smart like a human?" well, now let's go to the second
part... so now that you understand roughly how to measure the speed or capability of a system, let's talk a little about how human brains work. you might think your ideas come from nowhere -- but you're probably not a brain scientist... what has been found in that field is that our brain contains a whole ton of neurons -- on the order of 86 billion, with orders of magnitude more connections between them -- and these neurons -- Mystra! -- these neurons start off kind of independent, but as we grow and learn, neurons build connections to other neurons. hopefully you've done enough high school biology that this is a refresher and not new information, but if it's new you can look it up yourself. you've got the nucleus of each neural cell, and each neuron creates a bunch of -- I believe the term is dendrites -- which are basically
the links, and these dendrites connect to dendrites from other neurons; that's how information and electrical signals get passed along in our brain. oh, I need to get out of here... this is way too dirty... so that's how it works. now -- and sorry, it's a lot of information to try to organize, especially assuming no prior knowledge -- and Mystra is stuck with her leash, let me fix that... Mystra, lift up your... yeah, there you go, okay, freedom! okay, so where was I? yeah: the basic component of the brain is the neuron, the neuron has connectors that connect to other neurons, and -- I don't want to explain ion channels and all that -- the simplest way (not the best way) to explain how the brain works is that each connection between neurons, those dendrites, becomes reinforced when you use it. imagine a path you're making through a forest: the first time around there's brush everywhere, it's very difficult, and it takes a lot of effort to get from point A to point B. but once you've gotten from point A to point B once, the path is a little clearer, right? so the next time it'll be a little easier, and then you'll clear a little more of the resistance, and the time after that a bit more, and a bit more, until eventually the path becomes like
this: a nice paved road, and the connection is super easy. now, the opposite is also true, right? if there's a connection that isn't useful -- a path you don't walk -- eventually the forest regrows and the path becomes much more difficult to pass again, up to the point where, if this path (in this case, this connection between neurons) goes unused long enough, it gets trimmed and pruned as excess, because your brain doesn't want to waste resources (energy/nutrients) maintaining it. so that, my friends, is how we learn, and this has been demonstrated -- I won't quote the researchers; if you don't believe me you can look it up yourself -- it's been shown that this is how all animals, all biological beings with brains like ours, learn. your brain starts, when you're a baby, with an explosion of neuron connections, and eventually you reinforce the ones you use.
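the reinforce-and-prune idea can be sketched as a toy loop -- my own illustration, with made-up boost, decay, and pruning values, not numbers from any neuroscience model:

```python
# toy "forest path": each use strengthens a connection, disuse decays it,
# and a connection that falls below a floor gets pruned entirely
def update(strength, used):
    if used:
        return min(1.0, strength + 0.2)  # clearing more brush each trip
    strength *= 0.5                      # the forest regrows
    return None if strength < 0.05 else strength  # pruned when too weak

path = 0.1
for _ in range(4):          # four trips from A to B
    path = update(path, used=True)
print(round(path, 2))       # 0.9: a well-paved road

unused = 0.1
while unused is not None:   # never walked again
    unused = update(unused, used=False)
print(unused)               # None: pruned away
```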
you make new connections as you learn more stuff, creating new paths in your brain, and eventually the brain wants to be efficient, because our species did not evolve in the environment of abundance we currently live in; we evolved in a context where efficiency was key (for survival). so the point is that, at its basis, when you learn something your brain is making a connection easier to fire: reinforcing the ion channels between the two neurons and decreasing the activation threshold that makes a neuron fire. I guess I didn't explain that part, so let me say a little more about it. what determines whether a path actually works -- whether the signal goes through and you can figure out how to add 2 + 4... yeah, that's a bad example... sorry, I'm trying to gather my thoughts; having no script is an issue. okay, whether the connection is established or not: it's kind
of like a light switch: on one side of the dendrite you have -- I'm simplifying -- a certain amount of positive ions, and on the other dendrite, from the other neuron, a bunch of negative ions, and if you accumulate enough electrical energy -- if activating the first neuron moves enough positive ions -- you exceed the activation threshold and make the connection... sorry, I'm imagining a bridge being built from both sides, with a little gap at the top -- I'm building the image (in my mind) as I'm talking. so yeah: there's a bridge between the two neurons, but there's a gap in the middle, and your idea (thought) of trying to figure out "how do I add two and three?" is a little car driving across it. I've been struggling to make the image clear, writing the analogy as I speak, so here it is: you've got a bridge with a gap, and whenever a neuron activates, it launches a car at 60 miles an hour toward that bridge. if you've reinforced the connection enough -- if you've reduced the resistance enough, shortened the gap in the bridge enough -- your car can jump and land on the other side, and the neuron on the other side of that bridge then activates itself and launches its own 60 mph cars down to all the other neurons it connects to, and those cars reach the next gaps, and if 60 miles an hour is fast enough to jump those holes, that activates the next round down the line. if it's not, the signal just stops there, and that's it. so that's how our brain transmits information.
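the bridge-and-gap picture maps onto a tiny threshold model -- again a sketch of my own with made-up numbers, not a real neuron simulation:

```python
# each "bridge" has a gap size; a fixed-speed car (the signal) clears a gap
# only if that gap has been narrowed (the connection reinforced) enough
CAR_SPEED = 60.0  # every neuron launches its signal at the same speed

def fires(gap):
    # the jump succeeds when the gap is small enough for a 60 mph car
    return CAR_SPEED >= gap

def propagate(gaps):
    # chain of neurons: the signal travels until one gap is too wide
    hops = 0
    for gap in gaps:
        if not fires(gap):
            break  # the car falls into the hole; the signal stops here
        hops += 1
    return hops

well_worn = [10.0, 25.0, 40.0]  # reinforced path: every gap is jumpable
neglected = [10.0, 80.0, 40.0]  # one gap grew too wide from disuse
print(propagate(well_worn))     # 3: the signal reaches the end
print(propagate(neglected))     # 1: it stops at the 80-unit gap
```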
I'm sorry for using this explanation -- I'm sure there are neuroscientists who could explain it with a much better analogy than I can -- but hopefully you get what I'm trying to say. as you learn things, you make the gap between the neurons (in that specific neural path) smaller, so the car can make the jump; as you stop using a path, you make that gap wider, the car jumps into the hole, and the signal -- the potential -- doesn't reach the next neuron. and that is exactly how transistors work in a neural network, right? transistors, as a lot of people point out, are binary -- ones and zeros. but a big innovation that came with neural networks was realizing, "hey, we can put a variable resistance between each pair of transistors that determines whether, when you switch that first transistor from 0 to 1, the signal carries over to the next one or not." same principle as I just described, right? if the resistance is too high, the gap between the two sides of the bridge is too large, the 60 mph car can't make the jump, and the next transistor doesn't get activated. so that, again, is an artificial version of exactly how our neurons work! it's not just ones and zeros, yes or no, binary, as in the old traditional systems -- the way your computer works, right? now that's not how
neural networks work. neural networks add that component that changes the resistance -- a variable resistance -- and it basically acts the same way our brain does. so what you do when you "train a system" is: you give it an input, wait for the system's output, and if the output is what you were looking for, you give it a pat on the back: "hey, that was great, good job, you gave me the right answer." if it isn't, you tell it that too: "hey, that's wrong, you need to adjust something, try again!" and you don't tell it -- well, in some older paradigms you can or you do, but in general you don't tell it -- "hey, go to transistor 76 and reduce the resistance by three and run it again." that's not how it works, or not how it should work. you just say "wrong answer, try again," and the computer will change something -- you don't necessarily even know what -- then try again and give you a new answer, and again you give it feedback: is this the right answer? yes? no? try again! or: good job! (you do this for as many iterations as required.) and you know what, that's exactly... oh, Daisy really wants to go home, sorry, I guess we're going this way. that's the exact same way a teacher teaches you how to add, right? what's 1 + 1? 3? oh no, sorry Timmy, try again. what's one plus one? two? oh great, Timmy, you got it right!
let's move on, or let's do another example. Timmy -- or your neural network -- might get the correct answer for one plus one, but then you train it more by giving it a new problem, right? what's 2 + 2? "2 + 2 is 3!" sorry Timmy, no, you're wrong, try again! and Timmy will adjust the way he thinks and try again, just like your neural network will adjust its resistances and try again. and you also check: hey, now that I've changed these values (the variable resistances), if I go back to 1 + 1, am I still getting the right answer? yes? no! oh crap, okay, clearly what I changed wasn't the right thing to change, so let me try again, etc. you do this -- you iterate -- millions and millions of times, and that's how humans learn, that's how ants learn, that's how computers, or neural networks, learn. sorry, I have to get Daisy in; I think it's been long enough of a walk, so I guess this will really just be part one.
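that right/wrong feedback loop is, in spirit, a perceptron-style update. here's a minimal sketch -- my own toy, not any production training algorithm -- teaching one artificial neuron logical AND purely from yes/no feedback, revisiting the old problems each round just like re-checking 1 + 1:

```python
# toy trainer in the spirit of the walk: only "right/wrong" feedback,
# never "go to transistor 76 and lower the resistance by three"
def predict(weights, bias, x):
    # fire (1) if the weighted signal clears the threshold, else stay quiet (0)
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(examples, rounds=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(rounds):
        for x, target in examples:  # revisit old problems too
            error = target - predict(weights, bias, x)  # right or wrong?
            if error:               # wrong -> nudge the "resistances" a little
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

# teach logical AND from nothing but yes/no feedback
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

the trainer never inspects or sets a specific weight by hand; it only compares the answer to the target and nudges, which is the point the walk is making.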
I wanted to go a lot further, but consider this the intro on AI -- or really on how human and natural brains work and how you can emulate that in a computer -- which will be the basis for part two tomorrow, where I'll explain a little more about how AI actually works and why it can learn the same way we can. thanks for listening, and sorry this was a little convoluted... it's a complex topic that requires a lot of background knowledge, and I struggled to summarize what I was trying to say, but I hope you got something out of it. please look forward to part two tomorrow, where I'll get into actual AI now that we've got the background out of the way. thanks for listening, please give me all your comments, suggestions, complaints, and insults, do all the social media stuff. I appreciate your time and thanks for joining me on the dog walk!
