Chiplets promise a new level of computing capabilities
Intel's example shows how increasingly difficult it has become to keep pace with Moore's law when moving from one process node to the next, writes ZDNet. There are fears that it will eventually stop working altogether, but for now, albeit with difficulty, it still allows the number of transistors on an integrated circuit to double roughly every two years. At the same time, chip designers have to find new ways to deliver the computing-power gains consumers have come to expect. One of the latest trends in the processor industry is the use of specialized chips whose design is optimized for a specific task.
One example is Google's Tensor Processing Unit (TPU), a custom processor that excels at the mathematical operations required for machine learning. Another is the specialized chips that appeared at the peak of interest in cryptocurrency mining. Such chips differ from the traditional central processing unit (CPU), which can perform a very wide range of computing tasks and can be considered a general-purpose chip.
Although specialized chips, or application-specific integrated circuits (ASICs), such as the TPU are exceptionally well suited to particular computational problems, they have one significant drawback: exorbitant development cost, says Bapi Vinnakota, director of silicon architecture program management at fabless chip maker Netronome.
A typical ASIC today is a system-on-a-chip (SoC). As the name implies, it is a single monolithic chip divided into several blocks, each performing a different task, from central processing to servicing interfaces such as USB ports and memory controllers. As an SoC gains functionality and its die area grows, the probability that any given die contains a defect increases, which drives up the defect rate and, with it, the cost.
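The link between die area and yield can be sketched with a simple Poisson defect model, a standard first-order approximation; the defect density used below is an invented illustrative value, not a figure from the article:

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free under a Poisson defect model."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Illustrative defect density of 0.5 defects per cm^2 (assumed value).
d0 = 0.5
small = poisson_yield(1.0, d0)  # 1 cm^2 die
large = poisson_yield(4.0, d0)  # 4 cm^2 die

print(f"yield at 1 cm^2: {small:.2f}")  # ~0.61
print(f"yield at 4 cm^2: {large:.2f}")  # ~0.14
```

Quadrupling the die area here cuts the share of defect-free dies from about 61% to about 14%, which is why growing a monolithic SoC gets expensive quickly.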
It has thus become much harder to squeeze more performance out of a monolithic ASIC, and the industry has resorted to a trick: breaking the design into separate, more compact "chiplets", each specializing in individual tasks, both general-purpose and domain-specific. The benefit of this decomposition is that, thanks to their smaller individual die area, chiplets can deliver greater performance than a monolithic SoC. This has been made possible by improvements in the interconnects used for data exchange between chiplets.
According to Vinnakota, chiplets can reduce the cost of developing domain-specific accelerators such as the TPU: "The modern realities of fabless logic production are such that, when taking on the development of a compute accelerator, the customer needs to worry about only one thing: what the target chiplet should be. You can now choose a best-in-class chiplet for almost every function. You can, for example, buy an I/O chiplet as a proven, off-the-shelf part. The customer thereby significantly reduces development and testing costs. Those costs would have been much higher had they designed not a chiplet but a full-fledged SoC, with far more transistors and, accordingly, a much more complex layout."
In addition, chip costs can be reduced by combining new and old generations of chiplets in one package, even ones manufactured on different process nodes. The process node determines the size of the transistors packed into a chip, and therefore their number. According to Vinnakota, using chiplet technology to build ASICs will, along with lowering cost and improving performance, significantly increase performance per watt, opening up opportunities for ASICs to address a wider range of tasks than is possible today.
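The economics of mixing process nodes can be sketched with a toy cost model that reuses the Poisson yield approximation. All wafer costs, die areas, and defect densities below are invented for illustration, and packaging and interconnect costs, which the chiplet approach adds back, are ignored:

```python
import math

WAFER_AREA_CM2 = 706.0  # approximate area of a 300 mm wafer, ignoring edge loss

def cost_per_good_die(area_cm2: float, wafer_cost: float, d0: float) -> float:
    """Illustrative cost of one defect-free die under a Poisson yield model."""
    dies_per_wafer = WAFER_AREA_CM2 // area_cm2
    good_dies = dies_per_wafer * math.exp(-area_cm2 * d0)
    return wafer_cost / good_dies

# Invented numbers: a leading-edge node is pricier and more defect-prone
# than a mature one.
monolithic = cost_per_good_die(4.0, wafer_cost=10_000, d0=0.5)

# Split the same functionality: 2 cm^2 of compute stays on the leading node,
# while 2 cm^2 of I/O and memory controllers moves to a cheaper mature node.
compute = cost_per_good_die(2.0, wafer_cost=10_000, d0=0.5)
io      = cost_per_good_die(2.0, wafer_cost=3_000,  d0=0.2)

print(f"monolithic SoC:  ${monolithic:,.0f}")
print(f"chiplet package: ${compute + io:,.0f}")
```

With these assumed numbers the chiplet package comes out several times cheaper per good unit of silicon than the monolithic die, because each smaller die yields better and only the compute portion pays leading-edge wafer prices.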
So what is holding back wider adoption of chiplets? The problem is the lack of a common hardware interface for connecting them, one that would let chip designers mix and match them seamlessly. To overcome this, the Open Compute Project and Netronome launched the Open Domain-Specific Architecture (ODSA) subproject, which aims to develop an open interface and architecture that allow chiplets from different manufacturers to work together.
Vinnakota, who also leads the development of the ODSA standard at Netronome, expressed hope that an industry specification will spur the development of a wide range of specialized chips and let developers pick chiplets for a specific class of tasks: "Our ultimate goal is to create a chiplet marketplace. We have focused on creating an open standard that will allow chiplets to 'talk' to each other. For them to work as a single chip, there must be a logical interface between the chiplet and the package."
According to him, the ODSA project aims to have the new logical interface ready by the beginning of the fourth quarter of this year, and then to build test chips based on it. If the bet pays off, we will see products reach the market that lay the foundation for a new class of chips. "Given that the design cycle is about 12 to 18 months, we will see the first chiplets with the new interfaces somewhere at the beginning of 2021," the expert said.