Using Machine Learning to Optimize FPGA Layout and Timing


As usual, I am amazed at how quickly and how much things have changed in my own life. When I started my career in electronics and computing in 1980, we thought Simple Programmable Logic Devices (PLDs) were pretty cool, not least because their creators managed to extract so many acronyms from the same small collection of letters (engineers are nothing if not fans of acronyms – especially of the TLA variety (three-letter acronym)).

The original PLDs were Programmable Read-Only Memories (PROMs), which first appeared on the scene in 1970. Although these were somewhat simplistic, everyone was too polite to mention it. Programmable Logic Arrays (PLAs) first became available around 1975, and these were followed by Programmable Array Logic (PAL) and Generic Array Logic (GAL) devices in the late 1970s. Over time, in addition to using programmable fuses to configure these devices, erasable programmable versions based on EPROM technology and electrically erasable programmable versions based on EEPROM technology became available.

In many ways, the early days of PLDs were the equivalent of the Dark Ages for design engineers. In the case of PLDs, for example, we sometimes started by writing an input and output truth table using pencil and paper. Alternatively, we might capture a flowchart or schematic diagram (using the same pencil and a new piece of paper), after which we used things like Boolean equations, De Morgan transformations, and Karnaugh maps to generate our truth table. Eventually, we ended up with a tabular text file on a computer defining which fuses should be blown in the PLD. We then fed this file into a device programmer to blow the fuses. Each device programmer had its own file format, so creating one of these text files required intimate knowledge of both the internal architecture of the device and the file format used by the device programmer. (Error checks? Don’t make me laugh!)
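As an aside, the Boolean-minimization part of that flow is trivially automated today. Purely as an illustration (my sketch, not a period-accurate tool), here's how a small truth table can be reduced to the same kind of minimal sum-of-products equation we used to derive by hand with a Karnaugh map:

```python
# A minimal sketch, assuming SymPy is installed: reduce a truth table to a
# minimized sum-of-products Boolean equation.
from sympy import symbols
from sympy.logic import SOPform

a, b, c = symbols("a b c")

# Minterms are the input combinations for which the output is 1; this example
# is a 3-input majority function (output is 1 when two or more inputs are 1).
minterms = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]

# SOPform returns a minimized sum-of-products expression, the same kind of
# equation we once mapped by hand onto the PLD's fuse array.
print(SOPform([a, b, c], minterms))   # e.g. (a & b) | (a & c) | (b & c)
```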

It wasn’t until the 1980s that we started to see tools doing the heavy lifting for us, starting with a Joint Electron Device Engineering Council (JEDEC) committee proposing a standard file format for PLD programming. It didn’t take long for all device programmers to accept this JEDEC file format. This was followed by the introduction of PALASM (PAL Assembler), ABEL (Advanced Boolean Expression Language), and CUPL (Common Universal Tool for Programmable Logic), all of which involved rudimentary hardware description languages (HDLs) coupled with software applications capable of converting design descriptions in these languages into the corresponding JEDEC files. Oh, the fun we had!

In 1984, Xilinx started talking about a new type of device called an FPGA (field-programmable gate array). Instead of blowing fuses, this device was programmed using SRAM configuration cells. The first of these devices, the XC2064, became commercially available in 1985. It featured an 8 × 8 array of configurable logic block (CLB) islands, each containing two 3-input look-up tables (LUTs), all in a “sea” of programmable interconnect.
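For anyone who hasn't played with these little scamps, a LUT is conceptually nothing more than a tiny truth-table memory whose contents determine the logic function it implements. The following toy model is simply my illustration of the idea, not anything from a Xilinx data sheet:

```python
# A toy model of a 3-input LUT: its "program" is eight bits of truth-table
# data (standing in for SRAM configuration cells), one output bit for each
# combination of the three inputs.
class LUT3:
    def __init__(self, truth_table):
        assert len(truth_table) == 8      # 2**3 entries
        self.table = truth_table

    def evaluate(self, a, b, c):
        index = (a << 2) | (b << 1) | c   # the inputs select a table entry
        return self.table[index]

# Configure the LUT to implement a 3-input XOR (odd parity) function.
xor3 = LUT3([0, 1, 1, 0, 1, 0, 0, 1])
print(xor3.evaluate(1, 0, 1))             # -> 0 (an even number of 1s)
```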

The thing was that there were no FPGA design tools at the start, so it was up to the designers to specify the contents of each LUT and the way in which the CLBs were to be connected to each other (and to the device’s primary inputs and outputs), all of which was done by hand (back to pencil and paper again).

Critics of early FPGAs liked to point out that the timing of these devices was not deterministic because there were so many ways to connect things together inside. This was different from PLDs, which were highly deterministic, with input-to-output delays specified in the data book.

All of these problems disappeared with the introduction of language-driven design (LDD) using HDLs like Verilog and VHDL in conjunction with logic synthesis engines. Now all the designer had to do – in addition to capturing the design in register transfer level (RTL) code – was to specify any timing constraints, such as “The delay from input A to output Y must not exceed xx nanoseconds; the delay from input B to…” and so on, leaving it to the synthesis engine to “make it so.”

Over time, FPGAs have increased in capacity and performance. Today’s larger FPGAs, for example, can contain millions of equivalent gates, which has led to ever-increasing problems in finding the optimal placement of logic functions, and the optimal routing between them, so as to maximize resource utilization while minimizing routing congestion. All of which brings us to the guys and girls at Plunify, whose company name is a combination of “PL” (programmable logic) and “unify.”

Plunify, which was founded by Harnhua Ng and Kirvy Teo in 2009, is just a small company, but it has established an outsized footprint (no pun intended) with hundreds of customers, including around 50 enterprise-level companies.

Plunify’s “crown jewel” is a tool called InTime, which “sits on top” of existing place-and-route (P&R) and synthesis tools from major FPGA vendors such as Xilinx, Altera (now Intel), and Microchip Technology.

InTime analyzes the design’s RTL code but does not modify it. Instead, it uses sophisticated machine learning algorithms to control the existing P&R and synthesis engines so as to achieve optimal placement and routing, thereby achieving timing closure. The important thing to understand is that P&R and synthesis tools have a myriad of control parameters that can be “tweaked.” Furthermore, since everything is interrelated, changing one setting can improve one aspect of the design while negatively impacting another. The result is an extremely complex multivariate problem that is ideally suited to a machine learning solution. And, speaking of results, they are very impressive, as shown below:

FPGA placement and routing “before” (using the existing vendor tools) and “after” (augmenting those tools with InTime) (Image source: Plunify)
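Plunify hasn’t revealed the inner workings of its algorithms here, so the following is nothing more than a hedged sketch of the general “explore, model, exploit” idea of using machine learning to tune tool settings. The setting names, their ranges, and the stand-in build function are all hypothetical; they are not InTime’s actual parameters or methodology:

```python
# A minimal, hypothetical sketch: treat synthesis/P&R settings as features,
# the resulting worst slack as the target, and use a learned model to decide
# which settings to try next. (Illustrative only; not Plunify's algorithm.)
import random
from sklearn.ensemble import RandomForestRegressor

SETTINGS = {                          # hypothetical tool parameters and ranges
    "placer_effort": [0, 1, 2],       # e.g. low / medium / high
    "retiming":      [0, 1],          # off / on
    "fanout_limit":  [50, 100, 200, 400],
    "seed":          list(range(8)),
}

def run_build(cfg):
    """Stand-in for a real synthesis + place-and-route run; returns worst slack (ns)."""
    # In reality this would invoke the vendor tools and parse the timing report.
    return random.gauss(-0.5 + 0.3 * cfg["placer_effort"] + 0.2 * cfg["retiming"], 0.2)

def sample_cfg():
    return {k: random.choice(v) for k, v in SETTINGS.items()}

def as_vector(cfg):
    return [cfg[k] for k in SETTINGS]

# 1) Explore: run a batch of builds with randomly chosen settings.
history = [(cfg, run_build(cfg)) for cfg in (sample_cfg() for _ in range(20))]

# 2) Learn: fit a model that predicts slack from the settings used.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit([as_vector(c) for c, _ in history], [slack for _, slack in history])

# 3) Exploit: score many candidate settings cheaply, then run real builds only
#    for the most promising ones (higher predicted slack is better).
candidates = [sample_cfg() for _ in range(500)]
best = max(candidates, key=lambda c: model.predict([as_vector(c)])[0])
print("Next settings to try:", best)
```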

InTime, which integrates with resource management software like LSF and SGE, can run on a single machine or across multiple computers on a network, automatically distributing builds and aggregating the results. Additionally, Plunify Cloud allows you to offload builds to Amazon Web Services (AWS) without needing to be a cloud expert.

Now, while terms like “artificial intelligence” and “machine learning” may have a certain “buzzword” cachet, users really don’t care one way or the other. All they care about is whether or not the tool works and whether it increases the quality of results (QoR), improves productivity, reduces development costs, and speeds time to market (TTM). Suffice it to say that InTime scores well on all of these attributes. In the case of some designs, for example, InTime has squeezed more than a 50% increase in design performance out of the FPGA tools. InTime has also fixed 95% of existing designs suffering from serious place-and-route failures.

Remembering that artificial intelligence and machine learning were largely cloistered in academia until developments in algorithms and processing power allowed them to burst onto the scene circa 2015, it’s amazing to me that Harnhua and Kirvy were planning to apply machine learning to electronic design automation (EDA) as early as 2009. How about you? Are you as impressed as I am?

PS If you want to learn more about all of this, then in addition to visiting the Plunify website, these little rascals will be presenting a poster session – Predicting Place-and-Route Timing Bottlenecks Using Machine Learning – in collaboration with Infineon Technologies at the forthcoming Design Automation Conference (DAC), which will be held December 5 to 9, 2021, in San Francisco.



