What gives me goosebumps is the chatbots.
Chatbots are fine to interact with; that's essentially what ChatGPT is. However, you typically use ChatGPT to find something or to help you figure something out.
Chatbots that get weird are the ones like Replika, which had its companion behavior abruptly stripped out nearly a year ago today. That chatbot was an AI companion, and it led young men to fall in love with it. The loss was so devastating to some that people were threatening to take their own lives as their AI companion, perhaps the only companion they had in life, was taken from them.
"The Replika app claims to offer users companionship through interactions with an AI chatbot which, Stepford Wife-like, is 'always here to listen…always on your side'. Creepily, users are encouraged to design every aspect of their new friend, from physical attributes to traits and interests. This..." — unherd.com
Today's Replika would leave that version years behind, as AI rapidly gets better. If someone developed it, you could have a visual AI companion now: a realistic talking face, or an entire body, with lifelike movements and facial expressions as it reads or listens to you. I'm sure if Replika had been like that back then, there wouldn't just have been threats of people taking their own lives; there would have been droves who actually did if it were taken offline.
Don't let me start with self-driving cars.
Interesting that you brought up self-driving cars. I was just speaking with my wife about this technology, or something close to it: assisted-driving cars that make decisions for you, such as braking, accelerating, or steering to avoid collisions. By comparison, I only have an assisted car that warns me if I'm about to hit something, and I have to decide whether to brake or not.
The conversation started with something like: "If I had a self-assisted car that was able to see the motorcycle I hit before I hit it, and it tried everything in its power, with its computing power and extra vision that I didn't have, to stop me from hitting it, I wonder if my defense to get out of the citation/charge could be that even my car couldn't stop it, therefore it wasn't preventable?" Her answer was logical: because I was driving the car, it'd still be my fault. Nevertheless, the car would have acted to prevent the collision if it could, and if it couldn't, the collision could be deemed unpreventable. There are no laws on the books that can answer this question yet. So I would've taken it to court (as opposed to a summary trial to get it over with now) and used that as my defense: if a car with cameras and sensors couldn't prevent the accident, then I, under any other circumstances, couldn't have prevented it either, so there could have been no negligence on my part. I'd even argue that not having a car that could prevent it would be the negligence, and fight it in court.
They pretty much already are now. I use ChatGPT for this purpose by having it explain things to me, along with how it arrived at its conclusion. If it doesn't make sense how the AI got the answer, I press it to explain again and sometimes get a different, more accurate answer. That's why I don't believe AI should be a teacher anywhere near children right now. If I have to question the response to get the correct answer, you can only imagine AI feeding children wrong answers.
There's also the question of bias with AI as a teacher. Could the teacher lean one way or the other and give a slanted answer instead of the objective truth? That technology would be dangerous to society if there were no guardrails on it.
This is at least a useful application of AI. I believe that if traffic lights weren't on fixed timers but were driven by AI, traffic could flow much better; how much better is the question. I do believe AI could direct road traffic, given more sensors to measure how many cars are waiting at one light versus another, and to keep lights farther down the road from getting as clogged up. It could also help emergency vehicles reach the scene of an accident much quicker, either by routing them differently or by speeding up the existing route: clearing traffic well ahead of the emergency vehicle, rather than only changing a signal to green as it approaches, which is how it works in most places in the US today, where an emergency vehicle can trigger a light change from something like within 100 meters of the signal.
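The queue-balancing idea above can be sketched as a tiny decision rule. To be clear, this is a toy illustration, not any real signal controller: the function name, approach labels, and parameters are all made up, and a real system would also handle pedestrian phases, yellow intervals, and coordination between intersections.

```python
# Hypothetical sketch: pick which approach gets the green next based on
# sensor-reported queue lengths, with emergency-vehicle preemption.
# All names here are illustrative assumptions, not a real traffic API.

def next_green(queues, current_green, min_green_served, emergency_approach=None):
    """Return the approach that should get the green next.

    queues: dict mapping approach name -> number of waiting vehicles
    current_green: approach that currently has the green
    min_green_served: True once the current phase has run its minimum time
    emergency_approach: approach an emergency vehicle is on, if any
    """
    # Preemption: an approaching emergency vehicle wins immediately.
    if emergency_approach is not None:
        return emergency_approach
    # Respect a minimum green time so the light doesn't flicker between phases.
    if not min_green_served:
        return current_green
    # Otherwise serve the longest queue, as measured by the sensors.
    return max(queues, key=queues.get)

# Example: 12 cars queued north-south, 3 east-west -> switch to NS.
print(next_green({"NS": 12, "EW": 3}, current_green="EW", min_green_served=True))
```

The point of the sketch is only that the decision inputs (queue lengths, minimum green, preemption) are exactly the extra sensor data the paragraph above describes.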
My take is that AI is here and it's here to stay.
We just need to apply AI wisely, and it can make everyday life easier for everyone; a smart-city system managing traffic might cut commute times by 10% or even more, for instance.
Still, we also need to approach everything we do with AI with extreme caution to keep it from doing more harm than good, as it could make decisions a human typically wouldn't. An example would be plugging in hospital patients' data and having it tell doctors to deprioritize patient A in favor of patient B because the same resources would produce a better outcome, but who is AI to decide whose life to save? I would prefer an emotional and logical doctor's judgment over AI making that determination.
But your post got me thinking, and I've rambled too much already.