Exponential Contingency Explosion: Implications for Artificial General Intelligence

Abstract : The failure of complex artificial intelligence (AI) systems seems ubiquitous. To model these shortcomings, we define complexity in terms of a system's sensors and the number of environments or situations in which it performs. Complexity is measured not by the difficulty of design but by the final performance of the system as a function of the sensor and environment counts. As the complexity of AI, or of any system, increases linearly, the contingencies increase exponentially and the number of possible design performances increases as a compound exponential. In this worst case, the exponential growth in contingencies makes assessing all of them difficult and eventually impossible, and as the contingencies grow large, unexpected and undesirable contingencies are expected to grow in number as well. This worst case describes systems that are highly connected, or conjunctive; for systems that are loosely connected, or disjunctive, contingencies grow only linearly with complexity. In either case, unexpected outcomes can be mitigated using tools such as design expertise and iterative redesign informed by intelligent testing.
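The scaling argument in the abstract can be made concrete with a minimal sketch. The formalization below is an assumption for illustration, not the paper's exact definitions: take a system with b binary sensors operating in e environments, where each contingency can be mapped to one of two responses.

```python
# Assumed formalization of the abstract's scaling claim:
# b binary sensors, e environments, binary response per contingency.

def conjunctive_contingencies(b: int, e: int) -> int:
    # Worst case (highly connected): every combination of sensor readings
    # in every environment is a distinct contingency -> exponential in b.
    return e * 2**b

def disjunctive_contingencies(b: int, e: int) -> int:
    # Loosely connected case: sensors are handled independently, so
    # contingencies grow only linearly with the sensor count.
    return e * b

def design_performances(contingencies: int, responses: int = 2) -> int:
    # Each contingency can be assigned any of `responses` behaviors, so the
    # design space is responses**contingencies -- a compound exponential
    # (2**(e * 2**b)) in the conjunctive case.
    return responses**contingencies

if __name__ == "__main__":
    for b in (2, 4, 8):
        c = conjunctive_contingencies(b, e=3)
        print(b, c, disjunctive_contingencies(b, e=3), design_performances(c))
```

For example, under these assumptions a modest b = 4, e = 3 system already has 48 conjunctive contingencies and 2**48 possible design performances, illustrating why exhaustive assessment quickly becomes impossible while the disjunctive count stays at 12.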
EXISTING SYSTEM :
• We can also say that a 25% larger brain cannot automatically yield superintelligence, because that difference falls within the range of existing human variance.
• There exist instrumentally rational agents that pursue almost any utility function, and they are mostly stable under reflection.
• At a higher level of abstraction, this says that there is great visible headroom for improvement over the human level of intelligence.
• To the extent that we are skeptical that further innovations of this sort exist, we might expect the grand returns of human intelligence to be a mostly one-time affair rather than a repeatable event that scales proportionally with larger brains or further-improved cognitive algorithms.
DISADVANTAGE :
• The problem of preventing a global catastrophe associated with the expected development of above-human-level AI is often characterized as "AI safety".
• To address the relation between global and local solutions, we created a classification of global solutions; this is a simpler task because all global solutions depend on one main variable: how many different AI systems will eventually be created.
• Different global solutions to the AI safety problem provide different levels of survival as the most plausible outcome.
• However, recovery of technological civilization, and thus the ability to recreate AI, remains possible, so the problem would probably reappear.
PROPOSED SYSTEM :
• This requires us to set aside the proposed slowing factor and consider what a rational agency might do if not slowed.
• The process includes the submission of patient information along with the proposed request and its justification.
• There are also proposed regulatory rules that will influence the use of AI in health care.
• If these health finance measures are enacted as proposed, they will greatly accelerate the uptake of clinical AI tools and provide significant financial incentives for health care systems to adopt them.
• As yet, no defenses have proven 100 percent effective, and new defenses are often broken almost as quickly as they are proposed.
ADVANTAGE :
• We used this classification to identify pairs of local and global solutions that are less risky when combined.
• A virus could be used to destroy chip fabs, shut down the internet, or cut electricity.
• A strategic advantage achieved by narrow AIs could produce global unification before the rise of superintelligent AI, by leveraging a nuclear power's preexisting advantage and increasing its first-strike capability.
• One way to gain such a decisive strategic advantage would be for the first AI to be created by a superpower (either China or the US) that is already close to world domination.
• Such a world government might also appear if one country gained an overwhelming military advantage by means other than AI.
