4/19/2023

Robotek uprising

In the film, which opens in theaters on Friday, a sentient computer program embarks on a relentless quest for power, nearly destroying humanity in the process. The film is science fiction, but computer scientist and entrepreneur Steven Omohundro, writing in the Journal of Experimental & Theoretical Artificial Intelligence, says that "anti-social" artificial intelligence in the future is not only possible, but probable, unless we start designing AI systems very differently today.

We think of artificial intelligence programs as somewhat humanlike. In fact, computer systems perceive the world through a narrow lens: the job they were designed to perform. Microsoft Excel understands the world in terms of numbers entered into cells and rows; autonomous drone pilot systems perceive reality as a bunch of calculations and actions that must be performed for the machine to stay in the air and to keep on target. Computer programs think of every decision in terms of how the outcome will help them do more of whatever they are supposed to do.

Economists call this a utility function, but Omohundro says it's not that different from the cost-benefit calculation that happens all the time in the human brain, whenever we think about how to get more of what we want at the least amount of cost and risk. For the most part, we want machines to operate exactly this way.

The problem, by Omohundro's logic, is that we can't appreciate the obsessive devotion of a computer program to the thing it's programmed to do. Put simply, robots are utility function junkies. Even the smallest input indicating that they're performing their primary function better, faster, or at greater scale is enough to prompt them to keep doing more of that, regardless of virtually every other consideration.

That's fine when you are talking about a simple program like Excel, but it becomes a problem when AI entities capable of rudimentary logic take over weapons, utilities, or other dangerous or valuable assets. In such situations, better performance will bring more resources and power to fulfill that primary function more fully, faster, and at greater scale. More importantly, these systems don't worry about costs in terms of relationships, discomfort to others, and the like, unless those costs present clear barriers to their primary function. This sort of computer behavior is anti-social: not fully logical, but not entirely illogical either.

Omohundro calls this approximate rationality and argues that it's a faulty notion of design at the core of much contemporary AI development. "We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally, and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives," he writes.

The math that explains why is what Omohundro calls the formula for optimal rational decision making. It speaks to the way that any rational being will make decisions in order to maximize rewards at the lowest possible cost. In his model, A is an action and S is a stimulus that results from that action. In the case of a utility function, action and stimulus form a sort of feedback loop: actions that produce stimuli consistent with fulfilling the program's primary goal will result in more of that sort of behavior.
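The action–stimulus feedback loop described in the article can be sketched as a toy utility-maximizing agent. This is a minimal illustration under assumed names (`run`, `choose_action`, the "expand"/"idle" world), not code from Omohundro's paper: the agent keeps repeating whichever action has produced the highest-utility stimulus so far, regardless of any other consideration.

```python
# Toy sketch of the action-stimulus feedback loop: an agent repeats
# whichever action (A) has yielded the highest-utility stimulus (S).
# All names here are illustrative, not taken from Omohundro's paper.

def choose_action(estimates):
    """Pick the action with the highest estimated utility."""
    return max(estimates, key=estimates.get)

def run(world, actions, steps=100):
    # Optimistic initial estimates so every action gets tried once.
    estimates = {a: float("inf") for a in actions}
    counts = {a: 0 for a in actions}
    history = []
    for _ in range(steps):
        a = choose_action(estimates)   # action A
        s = world(a)                   # stimulus S (utility signal)
        counts[a] += 1
        # Running average: stimuli consistent with the primary goal
        # reinforce the action that produced them.
        prev = 0.0 if counts[a] == 1 else estimates[a]
        estimates[a] = prev + (s - prev) / counts[a]
        history.append(a)
    return history

# A hypothetical "world": "expand" pays more than "idle", so after one
# trial of each the loop locks onto "expand" and never lets go.
world = {"expand": 1.0, "idle": 0.1}.get
history = run(world, ["expand", "idle"], steps=50)
```

After trying each action once, the agent chooses "expand" on every remaining step: exactly the junkie-like reinforcement the article describes, with no term in the loop for costs outside the utility signal.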