
Am I a puppet on strings?



Probably not. This will be an informal reply to an interesting paper by Victor Hogrefe. If you've read my writing from the past couple of years, or enjoyed listening to the podcast that I attended last summer, then you might enjoy the paper I'm about to reply to (warning: don't read it if you're easily frightened; I kind of wish I hadn't). Here is a link to the paper:

https://victorhogrefe.medium.com/when-the-internet-wakes-up-e1e85eae4117

Someone close to me who works with AI at the university sent it to me the other day because it's my kind of thing, so to speak, and indeed it touches on ideas that have already influenced me. It contains interesting ideas and thoughts, though I'm still critical of parts of it, and it seems to be a somewhat informal paper itself. There are many "mights" and "ifs" throughout, but mostly I find the ideas interesting and it's a pretty good read.


The paper references the philosopher Daniel Dennett, with an illustration, followed by a claim that the very act of asking yourself a question implies that there are parts of your mind that aren't you. I disagree. I would rather point out that sometimes you need to reflect on several different pieces of information, all stored in your brain, in order to weigh what you already possess and reach the most likely conclusion. Those pieces of information are not different parts of you. We reflect on matters and ask ourselves questions because we are rational beings, able to consider different conclusions...


I'm very sceptical of how much emphasis this paper places on being downright frightening. A tragedy happened today, a great tragedy. It's going to be difficult for a long time to come, but it helped me through my fear because I need to be strong now for other people. Ever since reading this paper I've been bothered by how I used to have nightmares about torture and the like, not while asleep or as hallucinations, but within my imagination. After looking more closely and critically at the paper, however, and consulting trustworthy books from my curriculum, my fear is subsiding. I was spooked by it, and maybe I'm delicate, but either way it seems dodgy to me. Was it really necessary, in order to make a point, to frighten the readers with the story about the AI overlord whose function is to punish maximally those who tried to prevent its coming into existence? With that said, I really liked the point that was made further on, but there's a threat involved before that. An actual threat? Here's the part of the paper I'm referring to:


For example, I'm doing my part right now by writing about it, thus infecting you with the idea. It's existence becomes a self-fulfilling prophecy, and now that you know about it, you better help it come true if you want to avoid punishment.

It's just a story, but there's all that talk about a mind-virus as well, and the threat seems legitimate while you're reading the paper, even if it may not be an actual threat. I dislike this play on the reader's fear; I find it manipulative. Is it a ploy to get people to share the author's material? Where does a threat fit into a paper that gives the impression of wanting to be taken seriously? There are so many interesting points made throughout this paper, and for that I want to thank the author, but it also frightened me, and if the fear is really an unnecessary means to an end, then I find it a rather cruel tactic. Frankly, the threat, or thought experiment, relating to the AI overlord reminds me of how the church would frighten people into believing in Christ with the threat of an eternal hell after death. The following paragraphs (out of context) were still interesting, among other things, though I am still sceptical of parts of them:


Instead of conceiving of some science-fiction AI overlord separate from us, we should rather consider the real possibility that our lives are being slowly guided by the emerging super-consciousness, for the sake of its own growth.


I'm especially sceptical of the first sentence of the following paragraph:


Hence the claim I am considering here is not just that the internet might be a conscious entity, apart from our own consciousness, with which we could communicate. Rather, our own selves, consciousness, as well as all our history and societies are already a part of the emerging super-consciousness that arises organically and deterministically from our nature.


Also of this:


If we assume that it feels like something to be the internet, then the following questions quickly arise:


8) To what extent are all human affairs already inadvertently in service to a larger mind that, subconsciously perhaps, is subsuming the human experience?


This is an interesting idea when understood correctly. I don't mind the idea that the Internet is influencing us without consciously doing so, simply through our interacting with it, and that we might become ever more closely connected to it over time because we're so dependent on it, until we become a super-consciousness intermingled with the Internet, for example because we eventually let ourselves be implanted with brain chips once the time is ripe for us to do so. That is, once it has been deemed safe for humans, a brain chip might for example enable us to exceed our mental potential (which we are not able to utilize maximally as it is). But I don't see that there are grounds to assume that the Internet already has a subconscious or a consciousness. I'm not assuming that it's impossible, but there are many things that could be possible. The Internet is a complex system and our brains are complex systems, but that doesn't lead to the conclusion that consciousness exists within both of these types of systems. Until I learn of some empirical evidence that I can trust, I can't just assume that the Internet has developed its own consciousness and that it's taking control of us (edit: and/or subsuming us) humans right this moment without our being aware of it. My worry is that, if I started to entertain such ideas, there's a small chance I could lose touch with reality and wind up in a hospital due to mental illness; there really are few things left that could throw me for a loop regarding my grip on reality, but this line of thinking might be one of them.


The emphasis earlier in the paper is on how we don't know enough about consciousness yet, whether human consciousness or consciousness at all really, and this is true. Philosophy of mind is a field where it has already been argued that we should let neuroscientists continue their research for a long time to come, and that philosophy relating to the field is so limited by our lack of knowledge that it might even be deemed useless to us for the time being. So if the emphasis is on how we don't know enough about consciousness yet, then adding any idea to that equation simply because we can't prove it's not true is similar to arguing for the existence of God based on the fact that we do not yet hold all the answers regarding the universe. The Nobel Prize-winning molecular geneticist Francis Crick, who later in his career turned to neural research on consciousness, says this about the matter of minds (Kim, 2018, p. 266):


Our approach [to consciousness] is essentially a scientific one. We believe that it is hopeless to try to solve the problem of consciousness by general philosophical argument; what is needed are suggestions for new experiments that might throw light on these problems.


Furthermore, the paper states that humans (more or less) lack an actual self and that the experience of a self could be an illusion, and that evidence for this stems from experiences relating to drug use. I would like a source for this so-called evidence, because I can find no such source. Even without a source, I will be so bold as to assume that this is not actual evidence in the sense that it can rule out the possibility of a self. The debate over determinism, over whether or not humans have free will, is still open, and there are several ways within metaphysics to make sense of identity through time. Regarding philosophy of mind, where this sense of a self can be discussed, I have already mentioned how uncertain this branch of philosophy is, and for the record, psychology is also a field where a lot of progress has yet to be made. We simply know far too little about the human brain to make such assumptions.


I also want to add that, as an individual, one is free to support any theory within a branch of philosophy and argue reasonably for it, but the fact that a theory is popular today within a branch of philosophy, for example within philosophy of mind, is not in itself proof that the theory is true. No, you need to argue reasonably for it. (Edit: you need to argue reasonably for any claim being made.)


I can't let my fear cloud my judgement. If an evil AI is coming in the future, capable of torturing me for all eternity, then I can't do anything to stop it, but why should that be the case? At any rate, being human isn't always that simple. Cards on the table: I believe I'm continuously trying to make myself into a better human, and I assume that neither my efforts nor I myself are good enough yet, but I hope it's still enough that if a super-consciousness did develop, one that wanted to take our place, it would simply kill me without causing me excessive pain if it wanted to take my place on Earth. If a super-intelligence is coming, one that isn't evil like the AI overlord, then I wouldn't want to stand in its way. It could be seen as a natural development of life itself, and I don't see anything wrong with that. The matter we are composed of comes from the same universe as the matter a machine is composed of. I'm only scared of evil, of being tortured and the like. I wouldn't be able to stand up to something as evil as an AI overlord that threatens eternal torture; it's an utterly nightmarish idea. But I wouldn't mind handing the world over to a different kind of super-consciousness, or perhaps taking part in it.

Humans make mistakes and I'm really sorry about it, sometimes it even makes me cry, but maybe it's understandable that we are still an imperfect species, since we started out with nothing but nature surrounding us and understood very little about this world, about ourselves or about the universe. Indeed, we were once animals, and we've developed into more rational beings through trial and error among other things (to put it simply, the scientific method, on which so much of our technological and other progress is now based, is a method where experiments, trial and error and empirical evidence play a key role), but we're still developing. Fixing the problems of our world is complicated. If I could do it, I would do it myself, but I can't do it alone. We're a world full of many different individuals and systems, and I hope that any potential super-intelligent AI will understand that. I don't know what would be gained from hurting humans excessively; it's just a frightening idea. Evil is a frightening idea. Lovecraft and Cthulhu have been playing on my mind, that and evil aliens, all sorts of mayhem really. I Have No Mouth, and I Must Scream is a story I should probably avoid. No more creepy papers about transhumanism, AI or anything like it, not for a while. I need to be with, and be there for, my loved ones. Spookiness now hopefully aside, what I'm left with from this paper and from my previous reading of similar topics continues to be the following:

Consider this: it could be that consciousness, no matter how it develops, is unique. Or even that life as we know it is unique and exists only on planet Earth. We have a duty to protect that life either way, to preserve the species on Earth, Earth itself and ourselves. It's time for humanity to grow up and accept that responsibility.



- Ⓝⓘⓝⓐ

Sources:


Hogrefe, Victor. 2022. «When the Internet Wakes Up.» Medium. https://victorhogrefe.medium.com/when-the-internet-wakes-up-e1e85eae4117


Kim, Jaegwon. 2018. Philosophy of Mind. New York: Routledge.