On this episode of ID the Future, host Robert Marks continues his conversation with Oxford University mathematician John Lennox about Lennox’s new book 2084: Artificial Intelligence and the Future of Humanity. Lennox reviews mythology and science fiction writing stretching from the ancient poet Hesiod to the novelist Dan Brown and MIT physicist Max Tegmark. He says that artificial intelligence (AI) predictions down through the ages are all heavily dependent on theological and philosophical presuppositions. He and Marks also discuss AI’s cousin, transhumanism, its surprising history, and its potentially very dark future, including the risk of what C.S. Lewis called “the abolition of man.”
On this episode of ID the Future, host Robert Marks interviews Oxford University mathematician John Lennox on Lennox’s new book 2084: Artificial Intelligence and the Future of Humanity. It’s a wide-ranging discussion about AI’s advantages already being realized, in medicine, for example; AI’s supposed potential to achieve human-like consciousness; ethical issues that AI programmers will have to grapple with; effects that AI will have on the economy and individual workers; and the risks associated with living in an AI world where every movement is tracked. A key question as we move toward this future, says Lennox, is what it means to be human.
On this episode of ID the Future, host Andrew McDiarmid and physician and Discovery Institute fellow Dr. Geoffrey Simmons conclude their three-part conversation about Simmons’ new book Are We Here to Recreate Ourselves? The Convergence of Designs. Our own arrival is impossible to explain through evolution, Simmons argues, in view of the incredible complexity of our neurological system and all that had to develop simultaneously with it.
On this episode of ID the Future, author and physician Geoffrey Simmons joins host Andrew McDiarmid in a wide-ranging discussion of his new book, Are We Here to Re-Create Ourselves: The Convergence of Designs. From the foresight needed in the design of eyes, to our stereoscopic and redundant hearing systems, to the mysteries of design in the nervous and circulatory systems, signs of engineered design are everywhere in the human body.
On this episode of ID the Future, Andrew McDiarmid catches up with philosopher Jay Richards at the recent COSM conference in greater Seattle. The two discuss the history of George Gilder’s Telecosm conferences and how the first one gave birth to a book Richards edited and contributed to 18 years ago, Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong A.I. Is the “singularity” coming, as Kurzweil argues there and elsewhere, when machines equal and then quickly surpass human intelligence? Does “machine learning” really mean learning? Will “Skynet” wake up? Jay describes Kurzweil’s sunny version of strong AI and the dystopian version. Then he argues the other side, namely that human beings possess something beyond the purely material, something even the most powerful computers will never possess.
On this episode of ID the Future we hear part two of a panel discussion on “The Danger of Totalitarian Science,” held at the July 2018 FreedomFest in Las Vegas. This discussion followed a screening there of the film Human Zoos, written and directed by Dr. John West. In this second episode, Discovery Institute Senior Fellow George Gilder raises concerns about artificial intelligence — but not the usual economic ones. He’s more concerned about the thinking underlying some of the more ambitious attempts at AI — and how it would tend to turn the whole world into one very large yet confining human zoo.
On this episode of ID the Future, Andrew McDiarmid reads an excerpt from a speech prepared by philosopher, mathematician, and trailblazing design theorist William Dembski for the launch of the Walter Bradley Center for Natural and Artificial Intelligence. Dr. Dembski asks whether we need worry about an AI takeover and answers no: there is no evidence that artificial intelligence (AI) could reach that level or achieve consciousness, and there is mounting evidence from both philosophy and the field of AI technology that it cannot and will not. “The real worry,” Dembski says, “isn’t that we’ll raise machines to our level, but that we’ll lower humanity to the level of machines.”
On this episode of ID the Future, Robert Crowther talks with author Jay Richards about Richards’ new book The Human Advantage: The Future of American Work in an Age of Smart Machines. Science fiction tantalizes us — and pundits terrorize us — with images of intelligent machines taking over for humans. Really taking over, as in replacing us. Some thinkers even say that’s just the next phase, since we’re machines ourselves. Jay Richards explains how that’s wrong, and there’s a lot more to hope for than to fear in our future with our new smart machines.
On this episode of ID the Future, philosopher Jay Richards continues his conversation with host and historian of science Mike Keas about Henry Kissinger’s recent Atlantic article, “How the Enlightenment Ends.” In the piece, Kissinger sounds an alarm over artificial intelligence and raises questions about machine ethics and the possibility that humans may learn we’re not so special after all. Richards, author of the new book The Human Advantage: The Future of American Work in an Age of Smart Machines, pushes back, explaining how we can continue to use artificial intelligence to our advantage, prudently but without fear of the robot apocalypse or of computers becoming conscious and free. No, Richards argues, those qualities cannot be programmed. They are, and will remain, the human advantage.
On this episode of ID the Future, Jay Richards talks with host Mike Keas about a recent Atlantic article from former National Security Advisor Henry A. Kissinger on “How the Enlightenment Ends” with the rise of artificial intelligence. Richards, whose forthcoming book The Human Advantage: The Future of American Work in an Age of Smart Machines covers this territory and more, explains that AI is about statistical processing, not budding consciousness, and that the ethical concerns it raises, while important, are in some ways not so new.