Every new technology offers both promise and pitfalls, whether it’s drones (fun for photos and video - but practical for deliveries?), self-driving cars (convenient and efficient - but safe enough, and what’s the consumer roadmap?) or VR/AR (immersive and exciting - but also distracting and potentially “too real”).
For the past 25 years, Daniel Sieberg has seen it all - at CNN, CBS News, Google, and several startups - and developed a presentation style all his own: the role of tech philosopher. How do we make sense of new technologies and ensure they are not only right for the times but right for us? What are the practical implications of future innovation and invention - beyond the gimmicks and gee-whiz?
Sieberg cuts through the noise and unpacks the developments of recent history while also looking ahead to what’s next. His goal is to generate critical thinking and awareness about where we’re headed in the 21st century. How should we apply anything new - and why? Sieberg is your guide through the complexities of technology, speaking about advancements in language that experts and amateurs alike can relate to.
I had the opportunity to attend the Defense Advanced Research Projects Agency’s (DARPA) Grand Challenge in 2004. Imagine part off-road rally, part Mad Max, and part science experiment. The goal was to complete the 150-mile course from Barstow, CA, to Primm, NV, with a fully autonomous vehicle.
I can still recall the electricity in the air - and in the myriad technologies strapped to the vehicles - on the morning of the event. Up for grabs were a $1 million prize and bragging rights for the engineers who succeeded. It was a pioneering moment designed to turn science fiction into science fact.
Not only did a grand total of zero vehicles make it across the finish line, but the furthest any of them traveled was just over seven miles - Carnegie Mellon’s entry, which eventually caught on fire. Others went backwards, crashed into trees or simply didn’t go anywhere at all.
Undaunted, the prestigious roster of academics and technologists went back to the proverbial drawing board - and not for long. Within a couple of years the vehicles had improved enough to complete the desert course, and DARPA even expanded to an Urban Grand Challenge, which involved actual streets rather than cow trails.
Were self-driving vehicles just around the corner?
To many of us, 18 years after those heady days in the desert, it seems as though self-driving or autonomous vehicles have stalled somewhere between a 21st-century dream and the harsh reality of interacting with drivers, bicyclists, pedestrians and everything else on the roads - not to mention the insurance issues, legislators, technological challenges and price.
We’ve heard about the various companies testing them out (including Google, where I worked for six years) and logging millions of miles without incident. When accidents do happen during testing of self-driving vehicles, it is often because the human driver took over. We even occasionally see them on the roads. I recently witnessed someone literally asleep at the wheel in the car next to my Uber. Yet mainstream adoption continues to lag.
The potential hazards were back in the public domain this past week with the report of criminal charges filed against the human driver of a Tesla on Autopilot that killed two people in another car in Gardena, CA, in 2019.
Legal experts believe the liability lies largely with the humans behind the wheel, even if the manufacturer offers technologies that make it sound like the self-driving system is in control. But is it any different from a plane on autopilot? We were once afraid to get into elevators without an operator, so perhaps we’ll evolve to be OK with cars acting in the same way.
Perhaps in the future self-driving vehicles will be able to communicate with one another and maintain safe distances - a network of vehicles that obeys the rules and even improves traffic efficiency. But reaching critical mass with that number of self-driving vehicles on the road feels a long way off.
In the meantime, countless questions remain:
Humans created technology - therefore all tech is human. Or is it? What about self-learning AI designed to operate on its own, developing algorithms without detailed input from humans? What will buyers need to understand about when to let the vehicle “take over” and when not to? There will need to be continued human oversight across the board, at least for the foreseeable future. Perhaps vehicles will always need to be a hybrid experience between developers and drivers - or at least until our levels of trust exceed our worries about the consequences.
I’ve ridden in a self-driving vehicle, and while the experience is initially unsettling, it can potentially free up our time for other pursuits (one mom in a Google promotional video said she enjoyed the chance to sit with her son in the back seat, doing homework together). But do we even like to drive? Do we want self-driving vehicles? Will enough people want them?
When Isaac Asimov published I, Robot in 1950, seat belts still weren’t standard in cars (Ford offered them as an option starting in 1955). But Asimov posed his laws of robotics: the first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, except where it conflicts with the first law, and the third law is that a robot shall protect its own existence so long as doing so doesn’t conflict with the first two laws.
Is that ethos possible with self-driving vehicles? Will future generations look back and say, “they had to drive THEMSELVES somewhere?!” - as though on par with needing to churn your own butter? Will they be available only for the elite and the wealthy and leave others in the dust?
There are still many questions left unanswered. And that’s where humans come in. When I moderated a panel on the future of artificial general intelligence for the World Science Festival in 2019, one of the points raised was that AI and robots may be great at answering questions, but humans are still critical to asking the right ones.
What are yours?