Smallish Theory of The Only Reasonable Possibility of Human-Like AI

a) we expect that non-human things can be made to do human things without going through human-like training.

b) we think this because non-humans exhibit some behaviors we do. a computer counts as we do. a dog notices sounds and people much as we do. so we misattribute how this behavior comes about: we think it’s a built-in property or ability of the non-human thing, and we come to think we have those built-in abilities too. we forget just how much training we need even for basic things like speaking sentences, counting past 2, and remembering things outside our immediate surroundings. and we don’t really know how much training non-human things go through either.

c) we also take for granted how much we prompt each other for understanding. consider a single conversation with someone you know really well. I’ll bet you ask them for clarification 50 times in a 5-minute convo. it doesn’t feel like that person is non-intelligent or lacking built-in ability, because they prompt in a way that doesn’t disrupt your flow. our patience for non-humans is incredibly short.

d) we do not appreciate how big a memory device the physical environment is. our daily lives are filled with physical barriers, road signs, alerts, maps, guards, schedules and more. imagine your life if you had only the sun, moon, stars and empty land to help you get around, do your work, and find people and food. imagine you lacked even one of the senses you think you have. how much would you be able to do? how well could you adjust? how quickly would you figure it out? and would figuring it out involve creating a bunch of physical implements?

e) when we think about what computers can be made to do, we underestimate by an order of magnitude the scope of memory/context involved. all of the advances in AI to date are trivial. a single AI system can do 1 task well only after many humans have trained it and it has consumed a lifetime’s worth of thermodynamic power. we don’t have an AI system that can simply switch tasks. it’s not even close. we greatly underappreciate how much memory is stored in the body (not just the brain) and in the environment.

f) the dynamic behavior of humans cannot be the result of lots of special-purpose “intelligent” senses/systems that magically find a way to cohere. there must be some basis that is much more general and simple, one that can absorb a tremendous amount of variation and keep going, taking that variation in and storing it, embodied, in the body and in relation to the physical world. it must be much simpler because most complex creatures manage to do relatively well thermodynamically, whereas computers do not.

g) AI will only become useful once humans are willing to learn in tandem with AIs, rather than attempt to get them to learn all the human things. AIs must also gain a physical embodiment at the scale of a living creature; even all the computers on earth aren’t capable of the coherence of a single human. their embodiment is too brittle, and the world we ask them to exist in is highly tailored to the human form.

read the original article at https://medium.com/maslo/smallish-theory-of-the-only-reasonable-possibility-of-human-like-ai-af41fdbc46c6