Artificial Intelligence.


What's the deal with artificial intelligence these days anyways? We are pouring billions of dollars and a huge share of the world's foremost academic experts and intellectual intelligentsia into creating some sort of machine god, an emergent super-intelligence that perhaps makes the mind of Einstein look like a goldfish's. There is a lot of myth connected to this idea of creating an Artificial General Intelligence, and a lot of money pouring into it as well. Lots of money, lots of talent, lots of expectations... a lot of investment, in a lot of ways.

Would AGI solve all our problems? Probably not, or maybe yes; AI alignment experts might have a lot to say about this. I'm reminded of Richard Brautigan's poem "All Watched Over by Machines of Loving Grace":

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

This seems to be the general sentiment of the normies of the Bay Area. One day at a park I wandered around asking random strangers the question, "When artificial general intelligence comes to dominate our lives, will it think of humans as pets or pests?" The first person I asked answered, "I don't give a shit, I'm just trying to enjoy eating my ice cream right now." But every other answer was "pets," which is kind of cute. Realistically it will probably be neither, or some combination of pet and pest, and who knows. Sometimes I like to think humans are like jellyfish in a tank trying to understand the minds of the humans who walk by and gawk at them.

I am not an AI alignment expert; I'm just a weirdo writing a blog who's dipped their toes into the circle jerk that is the artificial intelligence and rationalist community of the Bay Area. Of all the existential threats to humanity, the improper development and deployment of artificial intelligence is certainly up there in terms of threat vectors, so given the potential dangers, it makes sense that we should really consider and pay attention to how the technology evolves.

Currently, most work in the field of artificial intelligence follows more or less the same practical methods for building neural networks and the same general approach to machine learning. The frontiers of artificial intelligence, so to speak — the places currently finding many interesting and, more importantly, financially lucrative applications — all fall under the same paradigms of thought. More or less the same design philosophies are implemented and developed upon.

If AGI (Artificial General Intelligence) is the moon, then our current methods to reach it are like climbing a tree to reach the moon. Hey, maybe if you made the tree out of graphene nanoribbons and engineered it in just the right way, you could possibly reach the moon after a very long period of climbing. The point I'm making is that in artificial intelligence, I don't see many novel philosophies or approaches in terms of design; everybody is kind of trying to do the same thing and finding applications that way.

Most artificial intelligence startups are like this. The most effective method to reach the moon, we now know, is a rocket, not climbing a tree; but our ancestors, before they had an understanding of space and rocketry, probably climbed many trees and even mountains to reach the moon, to no success. Maybe the next advancement in artificial intelligence will employ advanced theoretical mathematical concepts that we haven't discovered yet, or have already discovered but haven't put toward this specific application.

While everyone is climbing trees, a few startups and academic enterprises are going to be blowing themselves up on the launch pad trying to launch a rocket. Each new startup will learn which fuel combinations didn't work from the ones that blew up, learn how to avoid exploding by developing better fuel casings and so on, and maybe even one day actually launch something into the atmosphere.

All the people climbing trees are suddenly going to see this rocket reach the atmosphere, which is orders of magnitude closer to reaching the moon than climbing a tree, and probably switch their methods to try to replicate and build upon it. (Right now it's financially lucrative to climb trees, because at least you can still pick some fruit, I guess.) This would represent a massive paradigm shift involving a new technology. Maybe someday one of these rockets will actually reach the moon, and everything will change.

Who knows; we know a lot less about consciousness than I think we think we do. How it links with artificial intelligence, who knows. There's the Chinese room thought experiment, and how that links to subjective qualia. Could an artificial intelligence have subjective qualia? Does it even matter? A hydrogen bomb doesn't have conscious thoughts (okay, maybe if you're a follower of panpsychism it does, but then what's even the point of classification), but it can still blow us up if we mishandle it.

I'll leave this piece with one more thought (I have many thoughts on this topic). Is the internet conscious? Could we consider the global hivemind of computer networks across the planet an artificial intelligence of its own? Our brain is a network of neurons all firing together, and all those processes together somehow equate to intelligence. The internet is a lot like this. If so, what is the subjective conscious experience of the internet, if it has one? This raises questions entangled with concepts such as qualia that are a bit nebulous and rough around the edges. I am reminded of the novels "Ender's Game" and "Speaker for the Dead," in which artificial general intelligence is an emergent phenomenon that arises outside of humanity's control through the use of the ansible network. Perhaps a nascent artificial intelligence with full subjective experience already exists in the human world; would we even be able to comprehend it? Does it matter? I don't know.

Maybe more people should watch Serial Experiments Lain. 




Comments

  1. Thanks for this Corbo, an arrow in my mental quiver to slay grandiose dreams of tech bros trying to make small talk. This is Liz btw (pardon my old ass BlogSpot account... it's been a decade)


