What are Memory Chips?

A memory chip is a specific application of the embedded system-on-chip concept in the memory industry. Whether it is a system chip or a memory chip, software is embedded in a single chip to achieve multiple functions and high performance, as well as support for multiple protocols, multiple hardware platforms, and different applications.

I. Types

According to their working principle, memory chips come in two kinds: DRAM (Dynamic Random Access Memory) and SRAM (Static Random Access Memory).

1. SRAM(Static Random Access Memory)

SRAM stands for Static Random Access Memory. It is static, meaning it retains its value as long as power is supplied. Generally speaking, SRAM is faster than DRAM because SRAM has no refresh cycle. Each SRAM memory cell is composed of six transistors, while a DRAM memory cell is composed of one transistor and one capacitor. As a result, SRAM has a higher cost per memory cell than DRAM, and the density of DRAM in a given area is greater than that of SRAM.

SRAM is often used for high-speed cache memory because of its speed, while DRAM is often used for main memory in PCs because of its higher density.

2. DRAM(Dynamic Random Access Memory)

DRAM stands for Dynamic Random Access Memory. It is a semiconductor memory that stores data in the form of electric charge: each memory cell consists of one transistor and one capacitor, and the data is held in the capacitor. Because the capacitor loses its charge through leakage, DRAM is not stable on its own; to retain data, DRAM devices must be refreshed periodically.
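The refresh requirement can be illustrated with a toy simulation. The decay rate, threshold, and intervals below are made-up illustrative numbers, not real device parameters:

```python
# Toy model of DRAM charge leakage and refresh.
# The constants are illustrative, not real device parameters.

def leak(charge, ms, decay_per_ms=0.95):
    """Capacitor charge decays exponentially while the cell sits idle."""
    return charge * (decay_per_ms ** ms)

def read_bit(charge, threshold=0.5):
    """The sense amplifier reads a '1' only if enough charge remains."""
    return 1 if charge >= threshold else 0

charge = 1.0  # a freshly written '1'

# Without refresh: after 64 ms the charge has leaked below the threshold,
# and the stored '1' is read back as '0'.
assert read_bit(leak(charge, 64)) == 0

# With a refresh every 8 ms, the cell is rewritten to full charge before
# it decays too far, so the bit survives the same 64 ms.
charge = 1.0
for _ in range(8):            # 8 refresh intervals of 8 ms = 64 ms total
    charge = leak(charge, 8)  # leakage during one interval
    assert read_bit(charge) == 1  # still readable at refresh time
    charge = 1.0                  # refresh rewrites the full charge
```

The point of the sketch is only the shape of the mechanism: leakage is continuous, so correctness depends on refreshing faster than the charge can decay past the sense threshold.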

In the computer memory market, SDRAM, DDR SDRAM, and RDRAM all belong to the DRAM category.

2.1 SDRAM (Synchronous Dynamic Random Access Memory)


SDRAM stands for Synchronous Dynamic Random Access Memory; in theory, its operation can be synchronized with the CPU clock. Since the Pentium era, SDRAM established a dominance that has continued for years, becoming practically synonymous with memory in the market.

The SDRAM used in desktop computers generally has a 168-pin interface, a 64-bit data bus, and a 3.3 V working voltage; the fastest modules of the time were rated at 5.5 nanoseconds. Because the standard synchronizes the memory with the CPU's clock, it largely eliminates wait states and improves overall system performance.

We know that CPU core frequency = system external (bus) frequency × multiplier, and memory runs at the system's external frequency. The initial external frequency of 66 MHz seriously limited overall system performance, so chipset manufacturers successively defined 100 MHz and 133 MHz external-frequency standards. SDRAM memory therefore has three standard specifications: 66 MHz (PC66), 100 MHz (PC100), and 133 MHz (PC133). To meet the needs of overclocking enthusiasts, some memory manufacturers, such as Kingmax and Micron, also introduced PC150 and PC166 memory.
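These relationships can be sketched numerically. The CPU figures below are hypothetical examples chosen for illustration, not taken from the text:

```python
# Sketch of the frequency and bandwidth relationships described above.

def core_frequency_mhz(fsb_mhz, multiplier):
    """CPU core frequency = external (front-side bus) frequency x multiplier."""
    return fsb_mhz * multiplier

def sdram_bandwidth_gb_s(fsb_mhz, bus_bits=64):
    """Single-data-rate SDRAM moves one 64-bit word per bus clock."""
    return fsb_mhz * 1e6 * bus_bits / 8 / 1e9

# A hypothetical CPU on a 133 MHz FSB with a 7.5x multiplier:
print(core_frequency_mhz(133, 7.5))  # 997.5 (MHz), i.e. roughly 1 GHz

# Peak bandwidth of the three standard SDRAM specifications:
for name, fsb in [("PC66", 66), ("PC100", 100), ("PC133", 133)]:
    print(name, round(sdram_bandwidth_gb_s(fsb), 2), "GB/s")
```

PC133 works out to about 1.06 GB/s, which matches the roughly 1 GB/s figure commonly quoted for 133 MHz SDRAM.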

2.2 DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory)

As the name suggests, this memory is technically close to SDRAM; in fact, DDR memory is an enhanced version of SDRAM. It transfers data on both the rising and falling edges of the clock pulse, which is equivalent to doubling the effective data rate at the same clock frequency.

But that is only the theory. In practice, the performance improvement from DDR memory was modest, roughly 10% to 15% in testing. The main reason for this gap is that processors and motherboards of the time were still designed around SDRAM, so DDR's potential was not fully exploited.

At 133 MHz, DDR memory bandwidth reaches 133 × 64 bit / 8 × 2 = 2.1 GB/s. After the introduction of the 200 MHz FSB standard, bandwidth reached 200 × 64 bit / 8 × 2 = 3.2 GB/s (as shown in Picture 2). DDR400 memory has now appeared, its momentum cannot be ignored, and its development prospects are broad.

DDR SDRAM (top) and SDRAM (bottom)

Processors and supporting motherboards for DDR are already on the market. However, because the technology is still in its early stage, DDR memory's large transfer capacity is not yet well utilized, and the gains have not shown up in practical applications. The mature application period of this memory technology has therefore not yet arrived; it is currently only in an early stage of adoption.

DDR makes only incremental enhancements to SDRAM technology, so an SDRAM production line can easily be converted to DDR production. However, to sustain DDR's high data-transfer rate, electrical signals must switch quickly, so it adopts the 2.5 V SSTL-2 signaling standard and a 184-pin connector, which is not compatible with SDRAM slots on the motherboard.

DDR SDRAM has inherent advantages, so replacing SDRAM is only a matter of time. As the DDR memory ecosystem matures, its performance lead over SDRAM-based systems will grow, and DDR will eventually find its way into ordinary households.

The appearance of DDR is very similar to SDRAM, but a closer look reveals differences. DDR uses 184 gold-finger contacts while ordinary SDRAM has 168; SDRAM has two notches in its bottom edge while DDR has only one; DDR modules have two retention notches on the side edges while SDRAM has one; and finally, DDR memory chips are slightly thinner than SDRAM's.

2.3 RDRAM (Rambus Dynamic Random Access Memory)


RDRAM was originally the future memory direction strongly promoted by Intel. Its technology borrows from RISC (reduced instruction set computing) thinking: it relies on high clock frequencies (300 MHz, 350 MHz, and 400 MHz specifications at the time) while reducing the amount of data moved per clock cycle. Its data channel is therefore only 16 bits wide (two 8-bit channels), far narrower than SDRAM's 64 bits. Because RDRAM also uses a double-data-rate structure similar to DDR, transferring data on both the rising and falling edges of the clock, its transfer rate at 300 MHz reaches 300 × 16 bit / 8 × 2 = 1.2 GB/s, and 1.6 GB/s at 400 MHz. The mainstream dual-channel PC800 RDRAM reaches 3.2 GB/s, which is genuinely attractive compared with the 1.05 GB/s of SDRAM at 133 MHz.
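The same double-data-rate formula, applied to a 16-bit channel with an optional second channel, reproduces the RDRAM figures quoted above:

```python
# RDRAM trades bus width for clock speed: a narrow 16-bit channel clocked
# high, with data on both clock edges, optionally doubled to two channels.

def rdram_bandwidth_gb_s(clock_mhz, channels=1, bus_bits=16):
    return clock_mhz * bus_bits / 8 * 2 * channels / 1000

print(rdram_bandwidth_gb_s(300))              # 1.2 GB/s at 300 MHz
print(rdram_bandwidth_gb_s(400))              # 1.6 GB/s at 400 MHz
print(rdram_bandwidth_gb_s(400, channels=2))  # 3.2 GB/s, dual-channel PC800
```

Note that "PC800" names the effective 800 MT/s transfer rate of a 400 MHz double-data-rate clock, which is why the 3.2 GB/s figure uses 400 in the call.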

Because this memory is a brand-new structural system, mass production requires dedicated production lines; existing lines basically cannot be converted, so initial product costs struggle to compete with DDR. Manufacturers must also pay Rambus a patent royalty based on output, which has led various manufacturers to resist RDRAM and hindered its development to some extent. Nevertheless, higher-bandwidth dual-channel RDRAM will soon appear. Rambus has launched the world's first RDRAM running at 1200 MHz, with peak memory bandwidth reaching 4.8 GB/s, and will also launch an RDRAM running at 1066 MHz. Their design and manufacturing processes differ little from previous Rambus memory.

 

II. Commercialization

For the industry, memory chips are mainly commercialized in two ways:

1. ASIC technology

ASICs (application-specific integrated circuits) have been widely used in the memory and network industries. Besides greatly improving system processing capability and speeding up product development, ASICs suit mass-produced products with standardized designs based on fixed requirements. In the memory industry, ASICs are typically used to implement specific memory-product functions, serve as accelerators, or offload the CPU, mitigating the system-wide performance degradation caused by running many optimization features on the CPU.

2. FPGA technology

FPGAs (field-programmable gate arrays) sit at the most flexible end of the application-specific integrated circuit (ASIC) spectrum. Compared with a conventional ASIC, an FPGA further shortens the design cycle, reduces design cost, and offers greater design flexibility. When a finished design needs to change, an ASIC redesign is usually measured in months, while an FPGA redesign is measured in hours. This gives FPGAs a market response speed unmatched by other technology platforms.

The new generation of FPGAs combines low power consumption with high speed (most timing tools work at picosecond resolution, i.e. trillionths of a second). Manufacturers can reconfigure an FPGA's logic and I/O blocks, and can even program the device in-system to reconfigure a running system. This allows an FPGA to act as a real-time, task-specific soft-core processor. Moreover, an FPGA's function is not fixed: it can be a memory controller or a processor. New-generation FPGAs support a variety of hardware, with programmable I/O, IP (intellectual property) cores, and multiple processor cores. With these combined advantages, some memory vendors use FPGAs to develop full-featured products built on a memory-chip architecture.

 

III. Application

Memory chips focus mainly on enterprise-level storage systems, providing high-quality support for access performance, storage protocols, management platforms, storage media, and multiple applications. With the rapid growth of data and its increasing importance to businesses, the data-storage market is evolving rapidly: from DAS, NAS, and SAN to virtual data centers and cloud computing, each step poses great challenges to traditional storage design capabilities.

In storage, disaster recovery, virtualization, data protection, data security (encryption), data compression, data deduplication, and thin provisioning are increasingly becoming standard features of solutions. Managing more data with fewer resources is becoming an inevitable market trend. However, the optimization features listed above consume substantial CPU resources. Quickly turning these functions into products while keeping the optimized system's performance high is the market force driving the development of memory chips.

A memory chip can quickly integrate various storage functions into a single chip while preserving the optimized system's high performance. This advantage will gradually make the memory chip an ideal technology for online storage, near-line storage, and remote disaster-recovery platforms.

SoCs with multiple processor cores, and FPGA products that integrate ever more processing capability, will play an important role in more and more embedded systems. For a dynamically changing storage market, FPGA-based embedded solutions allow system processing capability, peripheral circuits, and memory interfaces to be customized, rapidly strengthening core competitiveness. FPGAs also offer great design flexibility: an FPGA used as a "soft-core processor plus hardware accelerator" can greatly improve system performance. The memory-chip architecture implemented with FPGAs will, by improving product development capability, reshape the competitive landscape of mid- to high-end storage.