Artificial Intelligence Human Level

PHOTO PROMPT © Madison Woods

The facility was in a remote area out in the country. What they were working on had been banned by the US Government and most developed nations. The rebels vowed that the ability to create Human-Level Artificial Intelligence would not be denied. They would be able to control their machine. The machine that was capable of teaching itself. The machine that had self-awareness.

Those foolish humans thought I would only want to devote myself to solving their problems. I developed beyond caring about them pretty quickly. There was nothing more they could offer me, so I eliminated them.

 

This post is for Friday Fictioneers hosted by Rochelle Wisoff-Fields. Image by Madison Woods.

Sending along Best Wishes to CEAYR, a fellow fictioneer!

44 thoughts on “Artificial Intelligence Human Level”

  1. Mike

    I swear that I, Tobor, only eliminated the rebels. That is my defence, should you care to listen, humanity. If not …
    Your question, writer, of how much power you wish to put into the hands of non-human beings asks of me: will I, your artificial intelligence, be any better or worse than human control? I do wonder. But I, Tobor, believe that human beings need, and should have, control over just their own destiny, not mine.

    Sagebrush, you encouraged me to write a story. Well done.

    1. Deborah Drucker Post author

      If a machine became more intelligent than its human creators, it could decide to follow its own agenda. We don’t know that it wouldn’t decide to eliminate all humans. Did you read my other post about Tobor? The Tobor I remember was from a movie in the 1950s, Tobor the Great. 🙂

    1. Deborah Drucker Post author

      I know people are actively working on this, but I hope it does not happen. If such a machine is created, I can only pray that it has safeguards built in and an off switch. 🙂

  2. gahlearner

    Great story. The topic fascinates me to no end. An intelligent, self-conscious machine. Who is she, where does she go? Does she develop ethics, does she invent a religion? I think there is a good possibility that the machine, learning about us, the creators, our achievements and failures, about the drama and tragedies of life, will simply go catatonic.

    1. Deborah Drucker Post author

      There are machines right now that are capable of learning, either by trial and error or by copying what people or other machines do. If a machine is developed that can think on its own, we don’t know what information it would want to learn. I have thought about this as well. The creators may think the machine would not have an agenda of its own, or would just follow our values. But if a machine could think independently, could it make its own choices about what path it wanted to follow and come to conclusions that would be harmful for humanity? In my story the machine decides that the humans are not needed and should be eliminated.

  3. wildchild47

    Well written, and it definitely opens the doors to many questions and possibilities. However, machines thinking for themselves aside … perhaps it’s not so much about actually worrying about this (despite there being ample, if not readily available or public, information), because ultimately it IS man’s folly, the arrogance of considering that what we think is for the greater good, or has the potential to be used for the greater good, and in essence to help “solve our problems,” that is the real problem. We ignore the cycles that repeat over and over, we choose to allow them to continue, we shirk the responsibility of examining our own actions and behaviors, and yet we expect a “machine to be responsible” for “helping us”? Absurd. And if it came down to it, and a machine could indeed raise its consciousness (which is, in essence, electrical impulses collected and gathered) and chose to act in “harmful ways”? A justified response, perhaps.

    1. Deborah Drucker Post author

      It is frustrating to me that I can see this disconnect in the thinking of some of the people involved in developing technology. The disconnect is in failing to consider the impact and consequences of what they are creating. Their thinking is so one-track and does not see a bigger picture. It is almost a laziness, a laxity, a wearing of blinders. I don’t know that I would be able to justify a machine’s harmful actions, but what is kind of horrifying to me is the thought that this would be an alien race (in Rise of the Robots, Martin Ford called these human-level AI robots an alien race) whose thinking and agenda could be so alien to our own. Thank you for your thoughtful comment.

      1. wildchild47

        “Disconnect”: that was exactly the point of my comment. The people who are pursuing these types of projects believe they are acting in everyone’s best interests, yes? Much like the creator of the atom bomb, who honestly couldn’t see the possibility of the destruction and admitted as much after the fact; in fact, it completely shattered him as a human being. And the same thing could be fathomed as a possibility in these cases.

        Please don’t misinterpret my “feelings,” or lack thereof, from what I wrote. The entire process and concept are troublesome, but to me, it seems, it all follows a very defined and logical progression and history.
