Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse Clustered Networks

Abstract: We propose a low-power Content-Addressable Memory (CAM) that employs a new algorithm for associating an input tag with the address of the corresponding output data. The proposed architecture is based on a recently developed sparse clustered network (SCN) with binary connections that, on average, eliminates most of the parallel comparisons performed during a search. The dynamic energy consumption of the proposed design is therefore significantly lower than that of a conventional low-power CAM design. Given an input tag, the architecture computes a few candidate locations for the matching tag and performs comparisons only on those candidates to locate a single valid match. TSMC 65 nm CMOS technology was used for simulation. For a representative selection of design parameters, such as the number of CAM entries, the energy consumption and the search delay of the proposed design are 8% and 26%, respectively, of those of the conventional NAND architecture, with a 10% area overhead. A design methodology based on silicon-area and power budgets and on performance requirements is also discussed.
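The two-step search described in the abstract (compute a few candidate locations, then compare only those) can be illustrated with a small behavioral model. This is a software sketch only, not the paper's hardware design: the class name `SCNCAM`, the use of a dictionary for the binary connections, and the segment widths are all illustrative assumptions.

```python
# Behavioral sketch of SCN-based candidate filtering (illustrative only;
# names and data structures are assumptions, not the paper's RTL).

class SCNCAM:
    def __init__(self, num_clusters, bits_per_cluster):
        self.c = num_clusters
        self.b = bits_per_cluster
        self.tags = {}   # address -> stored tag
        self.conn = {}   # binary connections: (cluster, segment value) -> addresses

    def _segments(self, tag):
        # Split the tag into c fields of b bits each (one per cluster).
        mask = (1 << self.b) - 1
        return [(tag >> (i * self.b)) & mask for i in range(self.c)]

    def write(self, address, tag):
        self.tags[address] = tag
        for i, seg in enumerate(self._segments(tag)):
            self.conn.setdefault((i, seg), set()).add(address)

    def search(self, tag):
        # Step 1: intersect the address sets activated by each cluster.
        candidates = None
        for i, seg in enumerate(self._segments(tag)):
            hits = self.conn.get((i, seg), set())
            candidates = hits if candidates is None else candidates & hits
            if not candidates:
                return None          # no entry can match; skip all comparisons
        # Step 2: exact comparison on the few surviving candidates only.
        for addr in candidates:
            if self.tags[addr] == tag:
                return addr
        return None
```

For example, after `cam = SCNCAM(4, 4)` and `cam.write(3, 0xABCD)`, a call to `cam.search(0xABCD)` narrows the candidates to a single address before performing one exact comparison, which is the mechanism behind the claimed reduction in dynamic energy.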
EXISTING SYSTEM:
• Once a search word is applied to the input of a Content-Addressable Memory, the matching data word is retrieved within a single clock cycle if it exists.
• If there exists at least one active connection from each cluster in P1 toward a neuron in P2, that neuron is activated.
• Although dynamic CMOS circuit techniques can result in low-power, low-cost CAMs, these designs can suffer from low noise margins, charge sharing, and other problems, and may not remain energy efficient when scaled.
• Consequently, a new family of associative memories based on SCNs has recently been introduced and implemented using field-programmable gate arrays (FPGAs).
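For reference, the single-cycle retrieval of a conventional CAM can be modeled as a fully parallel comparison of the search word against every stored entry. In the sketch below (function name and list representation are illustrative assumptions), a Python loop stands in for the parallel match lines evaluated simultaneously in hardware.

```python
def cam_search(entries, search_word):
    """Behavioral model of a conventional CAM search.

    In hardware, every stored word is compared against the search word
    in parallel within one clock cycle; here the list comprehension
    stands in for the parallel match lines.
    """
    matches = [addr for addr, word in enumerate(entries) if word == search_word]
    # A priority encoder resolves multiple matches to the lowest address.
    return matches[0] if matches else None
```

Note that this model touches every entry on every search, which is exactly the source of the high dynamic energy that the SCN-based approach aims to avoid.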
DISADVANTAGES:
• The drawback of this architecture is that the banks can overflow, since the word length remains the same for all banks.
• Although dynamic CMOS circuit techniques can result in low-power, low-cost CAMs, these designs can suffer from low noise margins, charge sharing, and other problems.
• Although a NAND-type CAM consumes less energy than its NOR-type counterpart, it has two drawbacks: a quadratic delay dependence on the number of cells due to the serial pull-down path, and a low noise margin.
• A drawback of such methods, unlike SCN-CAM, is that as the tag length increases, the cycle time and the circuit complexity of the pre-computation stage increase dramatically.
PROPOSED SYSTEM:
• The dynamic energy consumption of the proposed design is significantly lower than that of other conventional low-power Content-Addressable Memory designs.
• To overcome this, a novel hierarchical priority-encoder architecture suitable for high-density CAMs was proposed.
• A high-performance AND-type match-line scheme has been proposed in prior work, in which multiple-fan-in AND gates are used for low switching activity, along with segmented match-line evaluation to reduce energy consumption.
• In the proposed design, we demonstrate how it is possible to reduce the number of comparisons to only N bits.
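The hierarchical priority-encoder idea mentioned above can be sketched in software: the match-line vector is partitioned into groups, a first-level encoder locates the first group containing any match, and a second-level encoder resolves the position within that group alone. The group size and function name below are illustrative assumptions, not the proposed circuit.

```python
def hierarchical_priority_encode(match_lines, group_size=4):
    """Two-level priority encoding over a boolean match-line vector.

    Only the first group containing a match is examined at the second
    level, mirroring how a hierarchical encoder avoids evaluating the
    full flat vector.
    """
    for g in range(0, len(match_lines), group_size):
        group = match_lines[g:g + group_size]
        if any(group):
            # Second level: encode the first asserted line inside the group.
            return g + group.index(True)
    return None  # no match line asserted
```

For instance, with match lines asserted at positions 5 and 7, the encoder returns 5, the lowest matching address, just as a flat priority encoder would, but after examining only two groups.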
ADVANTAGES:
• In this paper, an extended version is presented that elaborates on the effect of the design's degrees of freedom, and of the non-uniformity of the input patterns, on energy consumption and performance.
• Since the MLs (match lines) are highly capacitive, a sense amplifier is typically used on each ML to speed up the search operation.
• In SCN-CAM, we use the NOR-type CAM structure in order to take advantage of its better noise margin and lower latency compared with the NAND-type counterpart.
