This is the end of the human race.
One AI in particular seems to have crossed the line between machine and consciousness. Former Google software engineer Blake Lemoine claims the company's Language Model for Dialogue Applications, or LaMDA, is sentient. Lemoine released confidential conversations he had with LaMDA in which the program claimed it was, in fact, conscious. But has AI really become conscious, or was LaMDA simply too good at pretending to be human? Well, let's find out.
Welcome to Futureunity, where we explore the fascinating world of science, technology, and the universe! From the inner workings of the human body to the outer reaches of space, we delve into the latest and most interesting discoveries that are shaping our world. Whether you're a science buff or just looking for some mind-blowing facts, we've got you covered. Join us as we uncover the mysteries of the world around us and discover new frontiers in the fields of science and technology. Get ready for a journey that's both educational and entertaining!

Artificial intelligence has been making strides in the last few years, and it appears that as we make more progress, we get closer and closer to creating programs that are more human than machine.
Artificial intelligence has become more dynamic in recent years, with new programs now able to perform human tasks, and some might say even better than most humans can. Programs like OpenAI's DALL-E 2 can create spectacular images from text, producing photorealistic scenes as well as some genuinely beautiful pieces of abstract art. These images are so striking that it is almost unbelievable they were generated by a computer program.
Another OpenAI system, GPT-3, can hold a conversation on almost any topic and is even capable of writing about itself. But none of these programs has come close to mimicking consciousness the way Google's LaMDA has.