Approximate Error Detection With Stochastic Checkers
ABSTRACT :
Designing reliable systems while eschewing the high overheads of conventional fault tolerance techniques is a critical challenge in the deeply scaled CMOS and post-CMOS era. To address this challenge, we leverage the intrinsic resilience of application domains such as multimedia, recognition, mining, search, and analytics, where acceptable outputs are produced despite occasional approximate computations. We propose stochastic checkers (checkers designed using stochastic logic) as a new approach to performing error checking in an approximate manner at greatly reduced overheads. Stochastic checkers are inherently inaccurate and require long latencies for computation. To limit the loss in error coverage, as well as the false positives (correct outputs flagged as erroneous), caused by the approximate nature of stochastic checkers, we propose input-permuted partial replicas of stochastic logic, which improve their accuracy with minimal increase in overheads. To address the challenge of long error detection latency, we propose progressive checking policies that provide an early decision based on a prefix of the checker’s output bitstream. This technique is further enhanced by employing progressively accurate binary-to-stochastic (BTS) converters. Across a suite of error-resilient applications, we observe that stochastic checkers lead to greatly reduced overheads (29.5% area and 21.5% power, on average) compared to traditional fault tolerance techniques while maintaining high coverage and very low false positives.
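As a rough illustration of the underlying idea (not the paper's implementation), the sketch below encodes values as unipolar stochastic bitstreams, multiplies them with a single AND per bit pair, and flags an error when the stochastic estimate disagrees with the binary unit's result by more than a hypothetical tolerance. All function names and the tolerance value are illustrative assumptions.

```python
import random

def to_stochastic(value, length, rng):
    """Encode a value in [0, 1] as a bitstream whose fraction of 1s approximates the value."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def stochastic_multiply(bits_a, bits_b):
    """In unipolar stochastic logic, a single AND gate per bit pair multiplies two values."""
    return [a & b for a, b in zip(bits_a, bits_b)]

def approximate_check(binary_result, checker_bits, tolerance):
    """Flag an error if the stochastic estimate disagrees with the binary result
    by more than the tolerance; the check itself is approximate."""
    estimate = sum(checker_bits) / len(checker_bits)
    return abs(estimate - binary_result) > tolerance

rng = random.Random(0)
a, b = 0.6, 0.7
stream = stochastic_multiply(to_stochastic(a, 2048, rng), to_stochastic(b, 2048, rng))
print(approximate_check(a * b, stream, tolerance=0.05))   # correct output: rarely flagged
print(approximate_check(0.25, stream, tolerance=0.05))    # erroneous output: usually flagged
```

Because the checker's estimate is itself noisy, the tolerance trades off error coverage against false positives, which is the trade-off the abstract describes.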
EXISTING SYSTEM :
• A novel model reduction technique generates finite-state representations of large systems that are amenable to existing probabilistic model checking techniques.
• The invariance property over the obtained MC can then be analysed via probabilistic model checking and computed by existing software.
• We have been exploring the existence of distributions associated with an analytical solution to the finite-horizon probabilistic invariance problem.
• This work has employed finite abstractions to study the finite-horizon probabilistic invariance problem over Stochastic Max-Plus-Linear (SMPL) systems (a minimal sketch of this computation follows this list).
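As a minimal, self-contained sketch of the kind of computation a probabilistic model checker performs over such a finite abstraction, the code below assumes a toy finite Markov chain and computes the finite-horizon invariance probability by backward value iteration; the transition matrix, safe set, and function name are illustrative, not taken from the source.

```python
import numpy as np

def finite_horizon_invariance(P, safe, horizon, init):
    """Finite-horizon probabilistic invariance over a finite Markov chain:
    probability of staying inside the safe set for `horizon` steps,
    computed by backward value iteration."""
    v = safe.astype(float)                 # V_N(x) = 1 if x is in the safe set
    for _ in range(horizon):
        v = safe * (P @ v)                 # V_k(x) = 1_A(x) * sum_y P(x, y) * V_{k+1}(y)
    return float(init @ v)                 # weight by the initial distribution

# Toy 3-state chain; state 2 is the unsafe (absorbing) state.
P = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.00, 0.00, 1.00]])
safe = np.array([1.0, 1.0, 0.0])
init = np.array([1.0, 0.0, 0.0])
print(finite_horizon_invariance(P, safe, horizon=10, init=init))
```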
DISADVANTAGE :
• Ensuring reliability at low overheads is an important problem that has been addressed in various fields of research.
• To address this issue, we propose to design binary-to-stochastic (BTS) converters using multiple maximal-length polynomials that produce bitstreams of different lengths (an LFSR-based BTS converter is sketched after this list).
• To overcome this drawback, we instead explore a different use for stochastic circuits: as error checkers, where their speed impacts only error detection latency and not the performance of the circuit itself.
• To quantify the impact of such a design methodology on fault coverage and false positives, errors are injected into these designs through clock over-scaling (such that the error rate remains the same in all the designs).
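As a rough sketch of how such a converter works (under the assumption that "maximal polynomials" refers to maximal-length LFSR feedback polynomials), the code below compares a binary value against an LFSR output each cycle to produce a stochastic bitstream. The tap set, bit width, and function names are illustrative assumptions, not the paper's design.

```python
def lfsr(seed, taps, width):
    """Fibonacci-style LFSR: yields one pseudo-random `width`-bit word per cycle.
    With taps drawn from a maximal-length polynomial, the period is 2**width - 1."""
    state = seed
    mask = (1 << width) - 1
    while True:
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask

def bts_convert(value, width, length, taps, seed=1):
    """Binary-to-stochastic (BTS) conversion: emit a 1 whenever the binary value
    exceeds the LFSR output, so the fraction of 1s approximates value / 2**width."""
    rand = lfsr(seed, taps, width)
    return [1 if value > next(rand) else 0 for _ in range(length)]

# 8-bit LFSR with taps from a standard maximal-length polynomial (period 255).
bits = bts_convert(value=96, width=8, length=255, taps=(7, 5, 4, 3))
print(sum(bits) / len(bits))   # roughly 96 / 256 = 0.375
```

Using different maximal-length polynomials (and hence different periods) is one way to obtain bitstreams of different lengths from the same binary input, which is the motivation stated in the bullet above.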
PROPOSED SYSTEM :
• A method for approximate model checking of stochastic hybrid systems with provable approximation guarantees is proposed.
• We focus on the probabilistic invariance problem for discrete time stochastic hybrid systems and propose a two-step scheme.
• Many of the methods proposed in the area of stochastic hybrid systems for achieving this objective are based on numerical computations.
• Under certain regularity conditions on the transition and reset kernels of the stochastic hybrid system, the proposed procedure for approximate model checking provides an estimate of the invariance probability together with a certificate of guaranteed accuracy (a simulation-based illustration of such an estimate-plus-certificate follows this list).
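To make concrete what "an estimate together with a certificate of guaranteed accuracy" can mean, here is a simulation-based sketch that pairs a Monte Carlo estimate of the invariance probability with a Hoeffding-style (epsilon, delta) certificate. This is explicitly not the paper's two-step abstraction scheme; the toy dynamics, safe set, and function names are assumptions made for illustration.

```python
import math
import random

def estimate_invariance(simulate_step, in_safe_set, x0, horizon, epsilon, delta, rng):
    """Estimate the probability that a trajectory stays in the safe set for
    `horizon` steps, together with an (epsilon, delta) Hoeffding certificate:
    P(|estimate - true probability| > epsilon) <= delta."""
    samples = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))  # Hoeffding sample bound
    hits = 0
    for _ in range(samples):
        x = x0
        stayed_safe = True
        for _ in range(horizon):
            x = simulate_step(x, rng)
            if not in_safe_set(x):
                stayed_safe = False
                break
        hits += stayed_safe
    return hits / samples, samples

# Toy stochastic system: scalar linear dynamics with Gaussian noise; safe set [-1, 1].
step = lambda x, rng: 0.8 * x + rng.gauss(0.0, 0.2)
safe = lambda x: -1.0 <= x <= 1.0
rng = random.Random(0)
print(estimate_invariance(step, safe, x0=0.0, horizon=10, epsilon=0.05, delta=0.01, rng=rng))
```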
ADVANTAGE :
• While techniques such as vector processing and segmented stochastic representation have been proposed to improve performance, matching the throughput of binary circuits while retaining the compactness of SC remains a significant challenge.
• The proposed PA-BTS greatly improves the performance of StoCK with progressive checking compared to a conventional LFSR, at the cost of modest area overheads (one possible early-decision policy is sketched after this list).
• A more area- and energy-efficient solution is temporal redundancy, which, however, leads to degraded performance due to rollbacks and recoveries.
• Despite significant benefits in area and power, these implementations are usually inferior in performance and energy, limiting their potential as replacements for binary implementations.
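To make the idea of progressive checking concrete, the following is one possible early-termination policy, sketched under the assumption that the decision is made with a Hoeffding-style confidence margin around the running estimate of the checker's output bitstream. The paper's actual checking policies may use different stopping rules; all names and thresholds here are illustrative.

```python
import math
import random

def progressive_check(bitstream, expected, tolerance, confidence=0.99):
    """Progressive checking sketch: scan the checker's output bitstream bit by bit
    and stop as soon as a Hoeffding confidence margin around the running estimate
    resolves the flag / no-flag decision."""
    ones = 0
    n = 0
    for bit in bitstream:
        n += 1
        ones += bit
        estimate = ones / n
        margin = math.sqrt(math.log(2.0 / (1.0 - confidence)) / (2.0 * n))
        if abs(estimate - expected) - margin > tolerance:
            return True, n       # confidently erroneous: flag early
        if abs(estimate - expected) + margin < tolerance:
            return False, n      # confidently correct: accept early
    # No confident early decision: fall back to a full-length comparison.
    return abs(ones / n - expected) > tolerance, n

rng = random.Random(0)
stream = [1 if rng.random() < 0.42 else 0 for _ in range(4096)]
print(progressive_check(stream, expected=0.42, tolerance=0.05))  # usually accepts early
print(progressive_check(stream, expected=0.80, tolerance=0.05))  # usually flags early
```

Large mismatches are resolved after only a short prefix, while borderline cases consume more of the bitstream, which is why a progressively accurate BTS converter helps the early decisions.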