Editor’s note: This blog entry was authored by Laurent Fournier, Manager of Hardware Verification Technologies at IBM Research - Haifa.
When I tell people that I do pre-silicon verification for a living…well, you can imagine the yawns. Yet, without me—OK, without teams of people like me—computer functions that we take for granted or think of as simple, like making an ATM withdrawal, might not work so well.
I'll bet that when you go to the ATM and take out $20, you don't worry that $20,000 will be debited from your account instead. We expect that as long as we use the correct language to tell a computer what to do, it will do it: clicking on “Open” will open a file, and 2 + 2 will be 4.
But in reality, a million different factors could put these processes at risk. What if you try to open a network file at the exact same time as someone else? Or what if you have 20 other files already open? What about 200 files? The commands may not have been written to take these factors into account. In fact, developers cannot possibly consider every scenario when they develop a program. That's where verification comes in.
The tools that my team at IBM’s Research Lab in Haifa, Israel develops are meant to increase confidence that a computer works as it should. We can all grasp the importance of this when it comes to our bank accounts, but that’s just one example. Think of computers that help doctors dispense medicine or that dispatch emergency services.
Developing a processor entails two phases. The first is the design phase, in which developers use a hardware description language (HDL) to write instructions that describe how a processor should work. The second is the silicon phase, when the HDL instructions are transformed into an actual chip.
In between these phases, pre-silicon verification tools check the design before anything physical is built. The tools we develop in the lab generate tests to check that a chip functions as it should. For example, today's processors can execute several instructions simultaneously. But sometimes a mistake in executing one instruction is only revealed when a specific combination of instructions is completed at the same time. Our tools can generate these scenarios so the bugs they expose are found before the chip is ever manufactured.
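To make that a bit more concrete, here is a minimal Python sketch of the idea behind a random test generator. It is purely illustrative, not our actual tooling: the instruction list and function names are made up. The point is simply that stringing together random instructions surfaces combinations a designer never thought to try.

```python
import random

# A toy instruction set; real generators work from a full architectural model.
INSTRUCTIONS = ["add", "load", "store", "mul", "branch"]

def generate_test(length=4, seed=None):
    """Return a random sequence of instructions to run through the design."""
    rng = random.Random(seed)
    return [rng.choice(INSTRUCTIONS) for _ in range(length)]

if __name__ == "__main__":
    # Emit a handful of short tests; some will happen to combine instructions
    # (say, a store followed immediately by a load) in ways nobody tried by hand.
    for i in range(5):
        print(generate_test(seed=i))
```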
So, how did people verify processors before they had test generators like the ones we develop?
Life was simpler then—and so were processors. Chip verification tests used to be built manually. As processors became more complex in the late 1980s, IBM built the first automatic test generator, RTPG (random test program generator), to test the architecture of IBM Power processors.
As IBM went on to develop different processors for different architectures, a new tool had to be created for each one. Developers soon realized they needed a tool that could handle any architecture, so the model-based test generator (MBTG) was born.
MBTG pairs a generic engine, which handles the issues common to all processors, with an engine modeled on the specific architecture being tested. At this point, other companies began contacting IBM to develop tools for their processors—we even helped verify the Intel x86 architecture.
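A rough way to picture that split (again, just an illustrative Python sketch with made-up model contents, not how MBTG is actually implemented) is a single generic engine that reads whatever architecture description it is handed:

```python
import random

# Two toy "architecture models": the data the generic engine consumes.
# Real models describe the full instruction set, registers, exceptions, and more.
POWER_MODEL = {"name": "Power", "instructions": ["lwz", "stw", "add", "mullw"]}
X86_MODEL   = {"name": "x86",   "instructions": ["mov", "add", "imul", "push"]}

def generic_engine(model, length=4, seed=None):
    """Generate a test from whatever architecture model it is given."""
    rng = random.Random(seed)
    return [rng.choice(model["instructions"]) for _ in range(length)]

if __name__ == "__main__":
    # The same engine serves both architectures; only the model changes.
    for model in (POWER_MODEL, X86_MODEL):
        print(model["name"], generic_engine(model, seed=42))
```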
The tools we build today do what's known as dynamic generation: after each instruction is generated, the generator can gauge the exact state of the processor and then determine the next instruction to test. These tools have also evolved from the purely random checking of older verification methods to what is called biasing. Biasing allows for randomness in a controlled fashion—basically, we adjust the parameters during testing to ensure that we cover all bases and find any bugs that might only turn up under certain conditions.
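To show the flavor of dynamic generation with biasing, here is a toy Python sketch. The "processor state" is a single invented counter, nothing like the state our real tools track: after each instruction, the generator inspects that state and skews the odds so that a potentially tricky situation, a load issued right after stores, comes up far more often than pure chance would allow.

```python
import random

class State:
    """Toy processor state tracked by the generator as it emits instructions."""
    def __init__(self):
        self.pending_stores = 0

def next_instruction(state, rng):
    # Biasing: adjust the weights based on the current state so interesting
    # situations are generated far more often than uniform randomness would give.
    weights = {"add": 1.0, "store": 1.0, "load": 1.0}
    if state.pending_stores > 0:
        weights["load"] = 5.0  # strongly favor a load that may collide with a store
    choice = rng.choices(list(weights), weights=list(weights.values()))[0]
    if choice == "store":
        state.pending_stores += 1
    elif choice == "load":
        state.pending_stores = 0
    return choice

def generate(length=10, seed=0):
    """Dynamically generate a test, consulting the state after every instruction."""
    rng, state = random.Random(seed), State()
    return [next_instruction(state, rng) for _ in range(length)]

if __name__ == "__main__":
    print(generate())
```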
Verification has been one of IBM’s best investments, saving several hundred million dollars in development costs over the last 20 years. Our goal is to further simplify the test-generation process by adding an automation layer on top of our verification tools. This will automate the creation of input files, significantly reducing complex manual work and, we hope, shortening the overall verification cycle.
So, as we expect more from our technology—from banking apps to medical care—our team plans to have the tools in place to verify that they work as they should.