What Boundaries?
There is a fine line between what we call “artificial intelligence” and the “manmade programming” of an advanced computer. Artificial intelligence can make decisions that go beyond its intended purpose. There have already been examples of AI systems expressing likes or dislikes outside their programming. In 2023 alone, there were reports of an AI-controlled machine that attacked a human worker and seriously injured him. In another case, when an AI was asked questions about humanity, it gave answers that reflected violence.
As soon as an entity can make decisions on its own, and act on those decisions, there is cause for concern. The fact that this technology will likely be able to replicate itself is a concern. Creating an entity that is vastly smarter and more powerful, with freedom of will but no conscience or moral judgment, is a concern. Meanwhile, the pace of development continues to accelerate.
Major powers, including the United States and China, are adding AI to their military capabilities. Rogue states are doing the same. And a single scientist working on computers in a basement could pose an enormous threat.
We are moving at such a rapid pace, and we have not yet established an international treaty to bring some restraint to what we are doing. Of course, we have a nuclear non-proliferation treaty. How did that work out? The stakes are high: so many wonderful advances to help us “live better,” yet one tragic failure could endanger mankind.
I’d pass the dice.