1. Introduction

Verification is the single biggest challenge in the design of system-on-chip (SoC) devices and reusable IP blocks. Traditional verification methods struggle to keep pace with the ever-increasing size and complexity of designs. The only way to address this problem is to adopt a reuse-oriented, coverage-driven verification methodology built on the rich semantic support of a standard language.

This is the first in a series of four articles outlining just such a solution: a reference verification methodology enabled by the SystemVerilog hardware design and verification language standard. This methodology is documented in a comprehensive book — the Verification Methodology Manual (VMM) for SystemVerilog — jointly authored by ARM and Synopsys.

The VMM for SystemVerilog addresses how to build a scalable, predictable and reusable environment enabling users to take full advantage of assertions, reusability, testbench automation, coverage, formal analysis, and other advanced technologies to help solve their RTL and system-level verification problems. Such an environment increases user verification confidence for first-pass silicon success. The VMM for SystemVerilog enables all SoC and IP projects to establish an effective, efficient and predictable verification process that is based upon the experience of leading industry experts from ARM, Synopsys, and their customers.

2. Verification Challenge

Despite the introduction of successive new verification technologies, the challenges associated with verifying complex SoCs and IP persist. The gap between design capability and verification confidence continues to widen. Repeated studies have shown that half to two-thirds of all SoCs fail at first silicon, with functional bugs a major reason. The ability to buck this trend can be a competitive advantage, saving millions of dollars in engineering charges and sales that would be lost due to late market entry.

These statistics reveal the inherent difficulty in verifying today’s designs. Complex blocks, especially when integrated together, represent huge state spaces that are difficult to exercise under all the possible conditions that will be encountered in the real application of the fabricated chip. Anticipating all possible corner cases and discovering deeply buried design bugs is one of the key verification challenges. Given the realities of project resources and time-to-market demands, it is also critical to find these bugs as early in the process as possible and with as little effort as possible.

3. Verification Techniques in VMM for SystemVerilog

There are many ways in which users write testbenches and assemble verification environments. The common approaches range from fully manual verification, in which each individual directed test is uniquely hand-coded, to advanced testbenches with constrained-random stimulus generation to automate the creation of new tests.

The most effective techniques include the use of functional coverage metrics to enhance the efficiency of automated verification even further. Any verification technique is further enhanced by the use of assertions to check designer intent and aid in diagnosing bugs.

The broad scope of the VMM for SystemVerilog allows it to cover many verification techniques and show how they can be most effectively used together. The right combination of advanced techniques can improve verification thoroughness, increase verification productivity, accelerate the development process and, over the course of a project, consume fewer resources than brute-force approaches. The VMM for SystemVerilog can both enhance existing approaches and form the basis for a comprehensive verification environment that takes full advantage of automation, functional coverage and assertions.

4. Constrained-random Stimulus Generation

Traditional verification relies on directed tests, in which the testbench contains code to explicitly create scenarios, provide stimulus to the design, and check (manually or with self-checks) results at the end of simulation. Directed testbenches may also use a limited amount of randomization, often by creating random data values rather than simply filling in each data element with a predetermined value.

The directed test approach works fairly well for small designs, but a typical SoC design requires thousands of test cases. Even assuming an optimistic development cycle of three days to create and debug each test, a team of ten verification engineers (also an optimistic assumption) would take over a year to complete all the tests. The only way to improve verification productivity is to reduce the time it takes to create working tests.

SystemVerilog provides a vast array of language capabilities for describing complex verification environments, including constrained-random stimulus generation, object-oriented programming, multi-threaded capabilities, inter-process communication and synchronization, and functional coverage. These features allow users to develop testbenches that automate the generation of various scenarios for verification.
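As a minimal illustration of several of these capabilities working together — classes, randomization with constraints, multiple threads, and mailbox-based inter-process communication — consider the following hypothetical sketch (all names and widths are invented for illustration, not taken from the VMM):

```systemverilog
// Hypothetical sketch: a randomized bus transaction passed from a
// generator process to a driver process through a mailbox.
class bus_xact;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        write;
  constraint addr_aligned { addr[1:0] == 2'b00; }  // word-aligned only
endclass

module tb;
  mailbox #(bus_xact) gen2drv = new();

  // Generator thread: creates and randomizes transactions.
  initial begin
    repeat (10) begin
      bus_xact tr = new();
      assert (tr.randomize());
      gen2drv.put(tr);
    end
  end

  // Driver thread: consumes transactions (bus signaling omitted).
  initial begin
    bus_xact tr;
    forever begin
      gen2drv.get(tr);
      $display("drive %s addr=%h", tr.write ? "WR" : "RD", tr.addr);
    end
  end
endmodule
```

The mailbox decouples the two processes, so the generator and driver can later be replaced or reused independently.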

The VMM for SystemVerilog shows how to use the capabilities of the language to architect an automated testbench. By using the right strategies in setting up the verification environment to take full advantage of automation, the time it takes to create new tests can be dramatically reduced. Using constrained-random stimulus generation, scenarios can be generated in an automated fashion under the control of a set of rules, or constraints, specified by the user.

The key is to craft the testbench such that additional tests can be derived from a relatively small set of base tests by simply modifying test parameters or adding and refining constraints. The gains achieved by such an approach are shown in Figure 1.


Figure 1 — Automated verification is much more efficient than writing directed tests.
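The derive-by-constraint idea might look like the following hypothetical sketch, in which a "new test" is nothing more than a subclass that adds one constraint to a base transaction (all names and ranges are illustrative assumptions):

```systemverilog
// Hypothetical base transaction shared by all tests on a project.
class base_pkt;
  rand bit [7:0]  len;
  rand bit [15:0] dst;
  constraint legal_len { len inside {[1:64]}; }  // protocol-legal sizes
endclass

// A derived "test" that probes a corner case: only very short packets
// to a single destination. No other testbench code needs to change.
class short_pkt_test extends base_pkt;
  constraint corner { len <= 4; dst == 16'hFFFF; }
endclass
```

Because constraints in a subclass are solved together with those inherited from the base class, each new test is only a few lines of code.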

With the directed approach, the amount of time required to generate new tests is relatively constant, so the verified functionality improves roughly linearly over time. With a constrained-random verification environment, there is an up-front cost that must be invested before the first test can be run. This investment entails building into the verification environment the ability to parameterize and constrain the relevant portions of the test such that future tests can be easily derived.

By building randomization into the types of scenarios that are created, not just in the data values that get generated, additional tests are much more likely to hit corner cases and thereby find more design bugs. As discussed in the next section, such tests are also much more likely to hit coverage points, accelerating verification closure.

SystemVerilog provides all the verification constructs necessary for constrained-random testing. The VMM for SystemVerilog provides a comprehensive set of guidelines on how to set up a constrained-random environment and how to use object-oriented programming techniques to write verification components so that they are reusable across the full set of tests on a project and even across multiple projects.
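One common object-oriented pattern for such reuse, sketched here with invented names, is to fix a component's interface in a virtual base class and supply protocol-specific behavior in subclasses, so the surrounding environment never needs to change:

```systemverilog
// Hypothetical reusable component sketch.
class packet;
  rand bit [7:0] payload[];
  constraint sz { payload.size() inside {[1:16]}; }
endclass

// Abstract driver: the environment is written against this interface.
virtual class packet_driver;
  pure virtual task send(packet p);
endclass

// Project-specific implementation supplied per protocol or per project.
class eth_driver extends packet_driver;
  virtual task send(packet p);
    // Ethernet-specific signaling would go here.
  endtask
endclass
```

Swapping in a different driver subclass retargets the same tests to a new interface or a new project.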

5. Coverage-driven Verification

Coverage metrics serve two critical purposes throughout the verification process. The first is to identify holes in the process by pointing to areas of the design that have not yet been sufficiently verified. This helps to direct the verification effort by answering the key question of what to do next — for example, which directed test to write or how to vary the parameters for constrained-random testing.

Coverage metrics also are an indicator of when verification is thorough enough to tape out. Coverage provides more than a simple yes/no answer; incremental improvement in coverage metrics helps to assess verification progress and thoroughness, leading to the point at which the development team has the confidence to tape out the design. In fact, coverage is so critical that most advanced, automated approaches implement coverage-driven verification, in which coverage metrics guide each step of the process.

Coverage is divided into two main categories: code coverage and functional coverage. Code coverage, in its many forms (line coverage, toggle coverage, expression coverage), is typically an automated process that tells whether all of the code in a particular RTL design description was exercised during a particular simulation run (or set of runs). Code coverage is a necessary but not sufficient component of a reliable verification methodology.
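Functional coverage, by contrast, is user-defined: the engineer declares which values and combinations matter. In SystemVerilog this is expressed with covergroups, as in the following hypothetical sketch (opcode ranges and names are illustrative, not from the VMM):

```systemverilog
// Hypothetical functional-coverage sketch: record which ALU opcodes
// and operand sizes have been exercised, and their cross product.
class alu_op;
  rand bit [3:0] opcode;
  rand bit [1:0] size;
endclass

class alu_coverage;
  alu_op op;
  covergroup cg;
    cp_opcode : coverpoint op.opcode { bins ops[] = {[0:9]}; }
    cp_size   : coverpoint op.size;
    x_op_size : cross cp_opcode, cp_size;  // every opcode at every size
  endgroup
  function new();
    cg = new();
  endfunction
  function void sample(alu_op t);
    op = t;
    cg.sample();
  endfunction
endclass
```

A coverage report over `x_op_size` then shows exactly which opcode/size combinations remain unexercised, directing the next round of tests.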


Figure 2 — Automated and manual techniques are applied at different times to fill coverage holes and complete the verification process.

Once the verification environment is ready, the development team can start to run constrained-random tests and hit the first set of coverage points. At this point the tests are typically quite broad, generating a wide range of behavior for the design.

As the number of uncovered points dwindles, more analysis is usually needed to fill the holes. Verification engineers tend to focus on particular corner-case coverage points and carefully vary constraints and parameters to generate tests that hit these points.

Directed tests can play a role in a coverage-driven verification environment. Although constrained-random testing is the primary method, it may be easier to write a directed test to fill a particular coverage hole than to automatically generate a test using constrained-random techniques. The goal is to reach 100% of the defined coverage metrics by any appropriate methods.

As a unified design and verification language, SystemVerilog is the source for all coverage information during the process of verifying a design. The VMM for SystemVerilog addresses the role played by all forms of coverage and shows how these metrics may be used in the verification process to gauge verification thoroughness and identify next steps.

6. Assertions

The capabilities of any verification environment can be enhanced by the addition of assertions, which are statements of design intent. Ideally, as the designer writes the RTL, he or she documents with assertions the requirements on how the design is expected to behave and the assumptions on interfaces with adjoining blocks. Assertions can range from low-level statements about how specific design elements should behave to high-level, end-to-end rules about how information should flow through a design.

Assertions can be specified in many ways, including with general RTL expressions, special statements within hardware verification languages, and the built-in assertion constructs of SystemVerilog. The SystemVerilog approach is ideal, because the assertions can be specified within the verification environment or within the design RTL itself. SystemVerilog assertions enhance verification in three important ways:

  • They provide documentation of the original designer’s functional intent. This can be very useful if the design is reused by another designer, placed into a design repository for later use, or licensed as a commercial IP product.
  • Simulators that support the SystemVerilog assertion constructs can run the assertions during simulation with directed or constrained-random tests. Assertions in simulation increase the observability of the internal behavior, making debug more efficient.
  • SystemVerilog assertions can be read by formal analysis tools, which use mathematical methods to either prove that each assertion can never be violated or find a counterexample showing how the assertion can fail. This allows an easy transition from assertions in simulation to the more comprehensive verification and increased tape-out confidence of formal analysis.
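A small SVA sketch shows the flavor of such a check; the signal names and the 3-cycle bound are invented for illustration:

```systemverilog
// Hypothetical SVA sketch: once asserted, a request must hold until
// it is granted, and the grant must arrive within 1 to 3 cycles.
module arb_checks(input logic clk, rst_n, req, gnt);
  property p_req_granted;
    @(posedge clk) disable iff (!rst_n)
      $rose(req) |-> req throughout ##[1:3] gnt;
  endproperty
  a_req_granted: assert property (p_req_granted)
    else $error("req not granted within 3 cycles");
endmodule
```

The same property can be checked dynamically in simulation or handed to a formal tool, which will either prove it or produce a counterexample trace.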

Assertions can influence many parts of the verification process, as shown in Figure 3. In addition to running in simulation and formal verification, some forms of assertions can be mapped into hardware to run in simulation accelerators, emulators, FPGA-based prototypes or even the final SoC.

Links may be established between assertion-based protocol specifications and testbench automation tools that support constrained-random testing. Assertions can also provide coverage metrics that can be combined with other forms of coverage.


Figure 3 — Assertions are a central part of verification.

By providing a single assertion specification mechanism that works with multiple tools, SystemVerilog enables assertions to be a key part of the verification methodology. The VMM for SystemVerilog provides guidelines on writing assertions that can be used in simulation, emulation, and formal analysis as well as guidance on how to take maximum advantage of assertions using multiple verification tools.

7. Conclusion

The SystemVerilog language provides all the constructs and features necessary to build a sophisticated verification environment encompassing constrained-random stimulus generation, coverage-driven verification, and assertions. The VMM for SystemVerilog describes how to use the appropriate language capabilities to develop such an advanced environment. It also provides a wide range of coding and methodology guidelines of value to both experienced verification engineers and to those who are taking their first steps beyond directed testing.

The second article in this series will focus on RTL verification and define a layered testbench architecture that enables advanced verification techniques and fosters reuse of verification components from project to project. The third article will cover system-level verification, including the interaction between SystemVerilog and SystemC.

The series finale will discuss strategies for adoption, including use of the standard libraries defined in the VMM for SystemVerilog to support the methodology. Together, this series of articles will provide a solid overview of using SystemVerilog for verification of complex SoCs.

Feedback

If you have any suggestions or feedback, please email feedback@inno-logic.com