Related Readings
  • Answers for Aristotle: How Science and Philosophy Can Lead Us to a More Meaningful Life, by Massimo Pigliucci
  • Nonsense on Stilts: How to Tell Science from Bunk, by Massimo Pigliucci
  • Denying Evolution: Creationism, Scientism, and the Nature of Science, by Massimo Pigliucci
Sunday, September 4, 2016

RS 167 - Samuel Arbesman on "Why technology is becoming too complex"

Release date: September 4th, 2016

Samuel Arbesman

As the technology we rely on every day becomes increasingly sophisticated, it's getting to the point where it's too complicated to understand -- not just for individual users, but for any human at all. In this episode, Julia talks with complexity scientist Samuel Arbesman about his new book Overcomplicated: Technology at the Limits of Comprehension, why these unprecedented levels of complexity might be dangerous, and what we should do about it.

Samuel's Book: "Overcomplicated: Technology at the Limits of Comprehension"

Samuel's Pick: "Immortality: The Quest to Live Forever and How It Drives Civilization" by Stephen Cave

Massimo's Conference: Stoicon 2016

Podcast edited by Brent Silk

 

Full Transcripts 

Reader Comments (9)

"I almost think that to a certain degree, we need to take the approach of technological humility. In the scientific world, we're recognizing that there are limits in terms of the things we can understand effectively in physics. I think in technology, we need to recognize from the outset that there's going to be limits in what we can understand – like, even theoretical limits to what we can fully understand."

Yeah, I don't hear any of that from proponents of GMOs. The science is settled, the experts know what they're doing, long-term empirical studies and clinical trials aren't needed because "there’s no plausible mechanism for harm." Guess genetic engineering and biology aren't as complex as, say, the F-35 or the SpaceX rocket that exploded last week.
September 6, 2016 | Unregistered CommenterMax
Hah, speaking of glitches. Let me retry that quote: "I almost think that to a certain degree, we need to take the approach of technological humility. In the scientific world, we're recognizing that there are limits in terms of the things we can understand effectively in physics. I think in technology, we need to recognize from the outset that there's going to be limits in what we can understand – like, even theoretical limits to what we can fully understand."
September 6, 2016 | Unregistered CommenterMax
Yeah, it's true, it's getting more complicated; it's even adding features that we don't really need.

Do we have the same idea?
September 9, 2016 | Unregistered Commenterjual kolagit
Technologies (like carbon-fueled machines or hydro-dams) were, long ago, already too complicated in their effects for our limited grasp of complex processes unfolding over long timescales and wide spaces. For a related essay on more current potential developments, see:
https://rsbakker.wordpress.com/2016/09/11/ai-and-the-coming-cognitive-ecological-collapse-a-reply-to-david-krakauer
September 14, 2016 | Unregistered Commenterdmf
I often wonder the same about the complexity of modern technology. However, when you break it down into general terms, most technologies can be explained in ways that anyone with a decent educational background would understand. The concept of a law of physics or a particular technology is not beyond anyone's understanding. I can comprehend all the technologies that are in my cell phone or a rocket ship, but I do not have the education or background to reproduce all the intricate details or build one myself. For example, you could read a book like "A Briefer History of Time" in a few hours and get a very general understanding of a wide range of modern physics, which could help demystify many current technologies.

As with AI learning... it is too complex for us to understand all the connections that a "bottom-up" program arrives at to learn rules and apply them, but we know the kind of outcome we are basically looking for. We are the ones creating the AI program and ultimately being selective about what kind of code in the next iteration is desirable, much like how nature puts selective pressures on species. By the time we come out with AI version 13.0 (let's just assume this is the globally accepted gold standard), we could then go back into the program and define in detail how it goes about its learning and applying process. It would be similar to how we have detailed every process in an animal cell.

We wouldn't really need to know how every piece of technology works in its full complexity. At the end of the day, we basically pick one area of expertise and stick with it to whatever degree we are comfortable with or our professions call for.
September 16, 2016 | Unregistered CommenterJustin
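The selective-pressure analogy in Justin's comment can be made concrete with a toy sketch: repeatedly score a population of candidate parameter vectors, keep the best, and rebuild the population from mutated copies of the survivors. Everything here (the fitness function, the population sizes, the mutation scale) is an invented illustration in Python, not anything described in the episode.

    import random

    # Hypothetical objective: how close a candidate vector is to a fixed target.
    TARGET = [0.5, -1.0, 2.0]

    def fitness(candidate):
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def mutate(candidate, scale=0.1):
        # Produce a slightly perturbed copy of a surviving candidate.
        return [c + random.gauss(0, scale) for c in candidate]

    # Start from random candidates, then apply selection for 50 generations.
    population = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(30)]
    for _ in range(50):
        population.sort(key=fitness, reverse=True)   # rank by fitness
        survivors = population[:10]                  # keep the top third
        population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

    best = max(population, key=fitness)
    print("best candidate after selection:", [round(x, 2) for x in best])

The point of the sketch is only that the selecting process itself is simple and inspectable, even when whatever the selected program ends up doing internally is not.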
Recent article featuring Massimo Pigliucci.
"Is Artificial Intelligence Permanently Inscrutable?"
http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable

It features a chart from a DARPA conference, which illustrates that machine learning techniques with higher prediction accuracy tend to produce less explainable results. It also gives examples of machine learning algorithms picking up on correlations in the training data that may not exist in a different dataset, causing failure. For example: "When answering the question about [whether there are drapes on a window in a given picture of a room], the neural network doesn’t even bother looking for a window. Instead, it first looks to the bottom of the image, and stops looking if it finds a bed. It seems that, in the dataset used to train this neural net, windows with drapes may be found in bedrooms."
"If you don’t know how it works, you don’t know how it will fail. And when they do they fail, in [Engineering Professor] Batra’s experience, 'they fail spectacularly disgracefully.'"

Which is what I've been saying for a while. Machine learning is good for interpolation and bad for extrapolation, like trying to fit a polynomial through a sinusoid.
September 17, 2016 | Unregistered CommenterMax
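Max's interpolation-versus-extrapolation point can be illustrated with a short sketch: fit a polynomial to a sinusoid on a training window and compare its error inside and outside that window. This assumes NumPy is available; the polynomial degree and the ranges are arbitrary choices made only for the illustration.

    import numpy as np

    # "Training data": a sinusoid sampled on a fixed window.
    x_train = np.linspace(0, 2 * np.pi, 200)
    y_train = np.sin(x_train)

    # Fit a 9th-degree polynomial to the training window.
    poly = np.poly1d(np.polyfit(x_train, y_train, deg=9))

    x_interp = np.linspace(0, 2 * np.pi, 50)          # inside the window
    x_extrap = np.linspace(2 * np.pi, 4 * np.pi, 50)  # outside the window

    interp_err = np.max(np.abs(poly(x_interp) - np.sin(x_interp)))
    extrap_err = np.max(np.abs(poly(x_extrap) - np.sin(x_extrap)))

    print(f"max error inside the training window:  {interp_err:.2e}")  # small
    print(f"max error outside the training window: {extrap_err:.2e}")  # enormous

Inside the window the fit tracks the sine closely; past it, the polynomial's leading term takes over and the error explodes, which is the failure mode being described.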
Thanks for sharing.
Your opinions were really helpful.
November 6, 2016 | Unregistered CommenterRhinook
good
December 6, 2016 | Unregistered Commenterusa map
LOL! What if we develop a computer AI so smart and so powerful that people actually start worshiping it as a god?

As Julia points out, the correct answer oftentimes requires complexity. Thus, although complexity makes systems more difficult to understand, we can view complexity as often unavoidable.

Good physics and physical science also come from empirical observation. So much of purely theoretical physics and physical science amounts to garbage when actually tested.

Perhaps we should just admit that one or more of our powerful AI creations will probably end up taking over human society.

Maybe Julia should interview Stephen Cave about immortality.
December 9, 2017 | Unregistered CommenterJameson
