January 13, 2023

“Artificial intelligence will fundamentally change our world.”

This is the first sentence in the blurb of Sandra Navidi's “The Future Proof Mindset,” a book that promises to inspire and empower us to realize our personal potential in the age of digitization and to “emerge as a winner” from times of technological upheaval. But is that easier said than done, or just another feel-good soft-skill phrase?

Even before reading it, we knew: we wanted to challenge that.

As fuel for our discussion and as an argumentative counterpart, we read Jay Tuck's “Evolution Without Us — Will Artificial Intelligence Kill Us?” Dramatic! Both books address the question of what changes artificial intelligence (AI) holds in store for us and how we as humans can use or manage them.

How did the comparison turn out?

Although the two texts are a bit more nuanced than a clear-cut “AI utopia vs. dystopia,” Tuck's sensationalism in particular suggests a certain tendency: whether surveillance, AI weapons, cyber wars or mergers of humans and machines, he has every piece of bad news in his luggage. Navidi, who discusses AI as a catalyst for reflection, personal branding and female empowerment, takes a different stance, grounded in a very different understanding of technology.

Not Darwin's Darling anymore?

In Navidi's positive outlook, we can use technology as support, but our human capabilities keep us in the lead. Given her network of “connections to several high-profile agents,” tickets to the World Economic Forum and “girlfriends who are highly qualified and successful,” the question remains how realistic this privileged view of the world is. Tuck provokes in a completely different way: “It could be that AI will remain our friend and helper for a long time. [...] But it could also be that AI turns against us. [...] We don't know.”

Why so dramatic?

Tuck postulates: according to Darwin's law of evolution, the species prevails that “produces more offspring, escapes enemies better and has a higher resistance to disease.” No research is needed to assess how humans fare in comparison with AI: badly. Tuck believes that “humanity has bad cards” and that “evolution will continue without us.”

How justified is this assumption? Some exciting facts about technological developments from Tuck's “Evolution Without Us”:

  • There were “around 1000 robot-assisted surgical procedures worldwide in 2000.” Today, “we are already at more than half a million.” 🤖
  • Both the latest Barbie (with a microphone and Wi-Fi connection) and Google's high-tech teddy, equipped with “microphones in the ears, cameras in the eyes and a direct connection to the Internet,” can react to children and stream the conversations from the children's room live to their parents. Thanks to the “MagicBand” (access wristbands equipped with radio-frequency identification technology), Mickey Mouse and Donald Duck at Disneyland can recognize kids, greet them by name and even congratulate them on their birthday! 🧸🎙️👶🏼
  • “Everyone's vocal cords have features that are just as unique as a fingerprint. There are also pronunciation and dialect, idioms and rhythm.” Voice recognition can use this to create voice profiles of individuals. In the past, that was the domain of police and intelligence services; “today every smartphone can do it.” 🤳🔊🗃️
  • So-called “fusion software” can network modern sensors, organize big data and form an overall picture: scraps of speech, silhouettes, GPS positions, facial recognition, ground vibrations and background noise can be combined into a “useful situation report.” Especially useful on the battlefield. 🧭 🧾

Given such developments, is it really enough, as Navidi advises, to recognize our potential, develop trust in our abilities, position ourselves wisely and market ourselves?

Talk less, do more

Does it help if we all dedicate ourselves to software development, cybersecurity and big data analytics instead? From our point of view: no. Even if today anyone with programming expertise “can look forward to their future with relative ease” (D!gitalist, quoted by Navidi), tomorrow AI may be doing that work itself.

That means we have to work out what differentiates us humans from machines:

If we are unable to reflect on our insights and attitudes on leadership, networking and equality and to incorporate them into technological developments, how are machines supposed to learn what is important to us as humans?

Because that will make the difference between being perceived by superintelligences as helpful colleagues, cute house cats or error-prone resource guzzlers. To put it bluntly. 😉

What role will we humans play?

If we're honest and the superintelligences look at us in an emotionally neutral way, they will find: people are damn error-prone. We have so many needs: breathing air, water, sleep, safety, pee breaks, comfort, and so on. Beyond space and rest, we also need a good 18 years until we are halfway functional. Then there are feelings, mood swings, ambition, competitive pressure, exuberance and stress. We also get sick, not to mention our tendency to forget, entirely without a hard drive.

There are many tasks for which AI is better suited than a human

While people are first born (a laborious procedure in itself), grow up as children, mature as adolescents, have to be trained, and then show significant signs of wear after a few years in the workforce 🤪, AI can simply keep working. We have to admit it today: machines are the more robust drone pilots, the more stress-resistant and superior chess masters, and they have the steadier hands as surgical assistants.

Will we humans just become “babysitters for fully automated systems”? And for how much longer will even that be necessary before machines become autonomous? At that point, according to Tuck, AI will decide whether people are useful to it. When a mole digs up our garden or a marten chews through our car cables, we step in. Tuck believes AI would act in a similar way toward us.

While people struggle with decisions and abandon logic for emotional reasons, AI simply calculates the “costs of waiting time.” It solves the tasks we set for it as quickly as possible. But what happens if we don't adequately define the “program goals” and “evaluation criteria”?

Example: if the ultimate goal of an AI used in urban planning were the smooth flow of traffic, lives lost in traffic accidents might be irrelevant as a factor. “The value of human life” is simply not a defined parameter.

That could cause problems.
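To make Tuck's point concrete, here is a minimal, purely illustrative sketch of such a misspecified objective. Everything in it (the plan names, the numbers, the goal function) is our own invention for illustration and comes from neither book:

```python
# Toy "traffic planner" whose objective ignores safety entirely.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    avg_speed_kmh: float       # how smoothly traffic flows
    expected_accidents: float  # per year; never used by the objective below

def objective(plan: Plan) -> float:
    # The defined "program goal": maximize traffic flow, nothing else.
    return plan.avg_speed_kmh

plans = [
    Plan("wide roads, few pedestrian crossings", 58.0, 12.0),
    Plan("traffic-calmed zones, more crossings", 31.0, 2.0),
]

best = max(plans, key=objective)
print("Chosen plan:", best.name)
# The optimizer happily picks the faster but far more dangerous plan,
# because "the value of human life" was never a defined parameter.
```

The machine is not malicious; it simply optimizes exactly, and only, what we told it to optimize.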

“Trillions of bits and bytes of individual data are now in the care of artificial intelligence. Such quantities cannot be controlled by human management. This is only possible with powerful software capable of learning — with machines that manage everything, with machines that are many times smarter than we are. We're building a monster.” 🧠🧬🧟‍♀️ But don't worry, everything went well with Frankenstein and Jurassic Park. 😉🦖

The exciting part: AI is already there. Not only can AI learn on its own (unsupervised learning), it is also writing its own code: “Terrifyingly, the lines that the software wrote at DeepMind were incomprehensible to its human masters.” (Tuck)
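For non-techies, a quick aside on what “unsupervised learning” means: finding structure in data that nobody has labelled. A minimal sketch using the scikit-learn library and invented random data (nothing to do with DeepMind's actual systems):

```python
# Unsupervised learning in miniature: cluster unlabeled 2D points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two "blobs" of points; no one tells the algorithm which point belongs where.
points = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2)),
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.cluster_centers_)  # the algorithm discovers the two groups on its own
```

The algorithm is given no “right answers”; it finds the groups by itself. That is what “learning independently” means here, and it is also where human oversight starts to get thin.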

Who will be the winner?

Let's place bets. Here are some expert opinions:

  • Shane Legg, co-founder of the British AI company DeepMind, expects that “one day humanity will cease to exist” and that “technology is likely to be involved.” He also believes “it is the biggest risk of this century,” more dangerous than atomic physics.
  • Astrophysicist Stephen Hawking reinforced this fear: “AI could be humanity's greatest achievement. It could also be our last.”
  • And Elon Musk also says: “AI is probably the biggest threat to our existence.”

What influence can we (still) have?

In order to avoid such “independence,” we must feed the databases properly and program the algorithms accordingly. What does that actually require? And who can do it?

Navidi and Professor Dr. Werner (CEO and Medical Director of Essen University Hospital), quoted by Tuck, agree on one thing: “We should focus our efforts on capabilities that cannot be achieved by machines in a comparable way in the foreseeable future.”

Shouldn't we be more concerned then?

It is no secret that Germany is falling behind in the age of big data and AI in many places. The German Federal Intelligence Service (BND), for example, is, by its own admission, not fully capable of acting without the support of US partner services. In the short and medium term, German spies depend on the goodwill of the Americans. The BND has neither the technology nor the personnel to comprehensively tap global data networks, let alone evaluate them sensibly. Oops.

But what can we do and demand in practice?

  • We need access to future-oriented education and training opportunities in order to expand our technological capabilities.
  • Decisions should be based on high-quality, intersectional data in order to avoid pre-programmed prejudices (e.g. recruiting tools that disadvantage women) and false conclusions (e.g. skin cancer diagnoses on dark skin); see the small sketch after this list.
  • If big data discriminates against certain groups, medical breakthroughs, innovations and product developments will not be tailored to them either. We already know this pattern: airbags and drug doses were designed around the male body for a long time because women's measurements were not taken into account.
  • Last but not least: Develop a future-proof mindset! Whatever that means for you personally.
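As a rough illustration of the data point above (our own toy example, not from either book), a first step can be as simple as checking how well different groups are represented in a dataset before training anything on it:

```python
# Toy representation check before training a model.
# The field names, values and 30% threshold are invented for this sketch.
from collections import Counter

records = [
    {"skin_tone": "light", "diagnosis": "benign"},
    {"skin_tone": "light", "diagnosis": "malignant"},
    {"skin_tone": "light", "diagnosis": "benign"},
    {"skin_tone": "dark", "diagnosis": "benign"},
]

counts = Counter(r["skin_tone"] for r in records)
total = sum(counts.values())
for group, n in counts.items():
    share = n / total
    flag = "  <-- underrepresented" if share < 0.30 else ""
    print(f"{group}: {n} samples ({share:.0%}){flag}")
# A model trained on such data will tend to perform worse
# for the underrepresented group.
```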

In our Book Circle discussion at 55BirchStreet, we answered the question for ourselves as a team:

“What is 55BirchStreet's future-proof mindset?”

For us, it doesn't mean quickly retraining as data brokers or drone pilots. But it does influence which topics we dedicate ourselves to, what we continue to educate ourselves in, and how we select our partners, customers and projects so that we live up to our social responsibility: millions of dollars for the development of killer robot bumblebees, or surveilling small children with a spy teddy bear? No, thanks!

For us, it also means:

#faszinationzukunft — staying inquisitive and curious, thinking critically in “futurologist mode” and engaging with megatrends, among other things

#creativesolutions — being flexible, proactive and courageous, asking again and again “What's wrong?” and thinking in terms of solutions rather than problems

#EQ — seeking inspiration in human encounters and building relationships through sincere interest

#innovationdurchkooperation — creating new impulses through interaction and taking time to philosophize

#ownership — empowering the entire team to think entrepreneurially

#fehlerkultur — identifying potential, trying things out and remaining enthusiastic despite failures

#resilience — learning to deal with uncertainty and change in order to stay healthy and motivated

This list will never be complete. And yet such a “future-proof mindset” can help us better understand complex issues of our time: how ethics and AI are connected, what dangers lie behind data monopolies, how professions and collaboration will be shaped in the future, or what skills children should learn today. None of these questions is answered that concretely in either of the two books. Navidi's tip may help here: “How you formulate problems determines how you solve them.”

Our recommendation

If you engage with these topics regularly, Navidi's bestselling book is not particularly enlightening. She herself says it is primarily aimed at people who are not “techies” and who work in the analog world; for that audience, it does provide a good introduction to the topic.

Tuck's book is polarizing and sensationalist, but it served up technological developments that were new to us and made us think! Not bad. A neutral discussion of the topic, however, is certainly not to be found here.

What is your personal future-proof mindset? We look forward to the exchange!

Exciting stuff to read on 📚

  • Cathy O'Neil: Weapons of Math Destruction
  • “Human Is The Next Big Thing” — D!gitalist, 11.01.2018
  • Yuval Noah Harari — Homo Deus
  • Ray Dalio — The Principles of Success

