This is the first sentence of the blurb of Sandra Navidi's “The Future Proof Mindset,” a book that promises to inspire and empower us to realize our personal potential in the age of digitization, so that we “emerge as winners” from times of technological upheaval. But is that easier said than done, or just another feel-good soft-skill phrase?
Even before reading it, we knew: we wanted to challenge that.
As fuel for our discussion and as an argumentative counterpart, we read Jay Tuck's “Evolution without Us — Will Artificial Intelligence Kill Us?”. Dramatic! Both books address the question of which changes artificial intelligence (AI) holds in store for us and how we as humans can use or manage them.
Although both texts are more nuanced than a clear-cut “AI utopia/dystopia,” Tuck's sensationalist style in particular suggests a certain tendency: whether surveillance, AI weapons, cyber wars or the merging of humans and machines, every piece of bad news is on board. Navidi, who discusses AI alongside reflection, personal branding and female empowerment, takes a different stance, grounded in a very different understanding of technology.
In Navidi's positive outlook, we can use technology as support, but we lead the way with our human capabilities. Given her network of “connections to several high-profile agents,” tickets to the World Economic Forum and “girlfriends who are highly qualified and successful,” the question remains how realistic this privileged view of the world is. Tuck provokes in a completely different way: “It could be that the AI will remain our friend and helper for a long time. [...] But it could also be that AI turns against us. [...] We don't know.”
Tuck postulates: according to Darwin's law of evolution, the species prevails that “produces more offspring, escapes enemies better and has a higher resistance to disease.” No research is needed to assess how humans fare in that comparison with AI: badly. Tuck believes that “humanity has bad cards” and that “evolution will continue without us.”
How justified is this assumption? Some exciting facts about technological developments from Tuck's “Evolution Without Us”:
With developments like these, is it really enough, as Navidi advises, to recognize our potential, develop trust in our abilities, position ourselves wisely and market ourselves?
Does it help if we all dedicate ourselves to software development, cybersecurity and big data analytics instead? From our point of view: no. Even if today anyone with programming expertise “can look forward to their future with relative ease” (D!gitalist, quoted by Navidi), tomorrow AI may be doing that work itself.
That means we have to work out what differentiates us humans from machines:
If we are unable to reflect and incorporate our insights and attitudes on leadership, networking and equality into technological developments, how should machines learn what is important to us as humans?
Because that will make the difference in whether superintelligences perceive us as helpful colleagues, cute house cats or error-prone resource guzzlers. Harshly put. 😉
If we are honest, and the superintelligences look at us in an emotionally neutral way, they will find: people are damn error-prone. We have so many needs: air to breathe, water, sleep, safety, bathroom breaks, comfort and so on. Besides space and rest, we also need a good 18 years before we are halfway functional. Add to that feelings, mood swings, ambition, competitive pressure, exuberance and stress. We also get sick, not to mention our tendency to forget, entirely without a hard drive.
While people first have to be born (a laborious procedure in itself), grow up as children, mature as adolescents, be trained, and then show significant signs of wear after a few years in the working world 🤪, AI can simply keep working. We have to admit it today: machines are the more robust drone pilots, the more stress-resistant and superior chess masters, and the steadier hands as surgical assistants.
Will we humans merely become “babysitters for fully automated systems”? And how much longer will even that be necessary before machines become autonomous? By then, according to Tuck, the AI will decide whether people are useful to it. When a mole wrecks our garden or a marten chews through our car cables, we step in. Tuck believes AI will act in a similar way.
While people struggle with decisions and abandon logic for emotional reasons, AI simply calculates the “costs of waiting time.” It solves the tasks we set it as quickly as possible. But what happens if we do not adequately define the “program goals” and “evaluation criteria”?
Example: if the ultimate goal of an AI used in urban planning were the smooth flow of traffic, lives lost in traffic accidents might be irrelevant as a factor: “the value of human life” is simply not a defined parameter.
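The problem can be made concrete with a tiny sketch of a mis-specified objective function. All names, numbers and weights here are our own illustrative assumptions, not from either book:

```python
# Hypothetical sketch of a mis-specified optimization objective.
# If accidents carry no penalty, they do not influence the result at all.

def traffic_cost(avg_wait_seconds: float, accidents: int,
                 accident_penalty: float = 0.0) -> float:
    """Cost the traffic planner tries to minimize."""
    return avg_wait_seconds + accident_penalty * accidents

# An aggressive signal plan: short waits, more accidents.
aggressive = traffic_cost(avg_wait_seconds=30, accidents=5)
# A cautious plan: longer waits, fewer accidents.
cautious = traffic_cost(avg_wait_seconds=60, accidents=1)

# With no penalty defined, the optimizer prefers the aggressive plan:
print(aggressive < cautious)  # True

# Only once the cost of an accident is an explicit parameter
# does the preference flip toward the cautious plan:
print(traffic_cost(30, 5, accident_penalty=100)
      > traffic_cost(60, 1, accident_penalty=100))  # True
```

The machine is not malicious here; it simply optimizes exactly what it was told to optimize, and nothing else.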
“Trillions of bits and bytes of individual data are now in the care of artificial intelligence. Such quantities cannot be controlled by human management. This is only possible with powerful and learnable software — with machines that manage everything, with machines that are many times smarter than we are. We're building a monster.” 🧠🧬🧟‍♀️ But don't worry, everything went well with Frankenstein and Jurassic Park. 😉🦖
The exciting thing: AI is already that far along! Not only can AI learn independently (unsupervised learning), it goes even further: “Terrifyingly, the lines that the software wrote at DeepMind were incomprehensible to its human masters.” (Tuck)
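To make “unsupervised learning” a little more concrete: the algorithm receives raw data and no labels, and it discovers structure on its own. A minimal sketch of one classic method, 1-D k-means clustering (the data and starting centers are our own illustrative choices, not from either book):

```python
# Unsupervised learning in miniature: 1-D k-means clustering.
# No labels are given; the algorithm finds the groups by itself.

def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious groups around 1 and 10, found without any labels:
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
print(sorted(kmeans_1d(data, centers=[0.0, 5.0])))  # → [1.0, 10.0]
```

What DeepMind's systems do is of course vastly more complex, but the principle is the same: the structure in the output emerges from the data, not from human instruction.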
Let's place bets. Here are some expert opinions:
To avoid such “independence,” we must feed the databases properly and program the algorithms accordingly. What does that actually require? And who can do it?
Navidi and Professor Dr. Werner (CEO and Medical Director of Essen University Hospital), quoted by Tuck, agree on one thing: “We should focus our efforts on capabilities that cannot be achieved by machines in a comparable way in the foreseeable future.”
It is no secret that Germany is being left behind in the age of big data and AI in many areas. The German Federal Intelligence Service (BND), for example, is, by its own admission, not fully capable of acting without the support of its US partner services. In the short and medium term, German spies depend on the goodwill of the Americans: the BND has neither the technology nor the personnel to comprehensively tap global data networks, let alone evaluate them sensibly. Oops.
But what can we do and demand in practice?
In our Book Circle discussion at 55BirchStreet, we answered the question for ourselves as a team:
For us, it does not mean quickly retraining as data brokers or drone pilots. But it does influence which topics we dedicate ourselves to, where we continue our education, and how we select our partners, customers and projects so as to live up to our social responsibility: millions of dollars for the development of killer robot bumblebees, or monitoring small children with a spy teddy bear? No thanks!
For us, it also means:
#faszinationzukunft — stay inquisitive and curious, think critically in “futurologist mode,” and engage with megatrends, among other things
#creativesolutions — be flexible, proactive and courageous, keep asking “What's wrong?” and think in terms of solutions rather than problems
#EQ — seek inspiration in human encounters and build relationships through sincere interest
#innovationdurchkooperation — create new impulses through interaction and allow time to philosophize
#ownership — empower the entire team to think entrepreneurially
#fehlerkultur — identify potential, experiment, and stay enthusiastic despite failures
#resilience — learn to deal with uncertainty and change in order to stay healthy and motivated
This list will never be complete. And yet such a “future proof mindset” can help us better understand complex issues of our time: how ethics and AI are connected, what dangers lurk behind data monopolies, how professions and collaboration will be shaped in the future, or which skills children should learn today. Neither book answers these questions in concrete terms. Navidi's tip may help, though: “How you formulate problems determines how you solve them.”
If you already engage with these topics regularly, reading Navidi's bestselling book is not particularly enlightening. She herself says it is primarily aimed at people who are not “techies” and who work in the analog world; for them, it certainly provides a good introduction to the topic.
Tuck's book is polarizing and sensationalist, but it introduced us to new technological developments and made us think! Not bad. A neutral treatment of the topic, however, it certainly is not.
What is your personal future proof mindset? We look forward to the exchange!
Exciting further reading 📚
Do you have a question, or would you like to find out how we can work together?
Get in touch here or via LinkedIn – we look forward to hearing from you!