A Quick Look at SVAUnit

I've been writing more and more SystemVerilog assertions (SVAs) lately. I find that they are the best way to capture temporal requirements, allowing the rest of the (class-based) testbench to stay timing agnostic. Since assertions are a key part of the checking infrastructure, we need to make sure that they're bulletproof. This means we need to test them to make sure that they're doing what we expect them to do.

The typical flow when writing an assertion is the following (for me, at least):

  1. write an assertion for some specific aspect
  2. check that the assertion fails in some desired scenario (for example, by doing forces)
  3. decide that it's good enough and run a regression with the new assertion
  4. find out that it fails in some obscure corner case, where it really shouldn't be checking at all
  5. patch the assertion to avoid the false fail
  6. repeat from step 2 until getting a clean regression

With each iteration where we modify the assertion, we risk breaking something. We might introduce new false negatives, which will only be found after more iterations of the development loop. We might also introduce new false positives, because we relax the assertion so much that it no longer checks all the cases it should. While the first is bad because it costs us time, the second is worse because it creates gaps in our verification effort.

The main reason something can get broken in the above process is that step 2 becomes so complicated that we can't realistically run all checks with each loop. A better way to develop assertions would be to implement step 2 in such a way that it's repeatable, so that it can be run after each change of the code. Basically, that means writing unit tests for the assertion.

The SVAUnit library from AMIQ Consulting was written to help developers write better assertions. I've been wanting to test drive it for a while and I finally got a chance. The first place to start is this article on their blog, which contains a complete example. You could give it a read before going further with this post. We'll basically follow the same steps they did to get set up and write a first test, but we'll also do a few iterations of assertion refinement.

Our goal will be to write an assertion for the Wishbone protocol. From the description of the handshake in Section 3.1.3 and from Rule 3.60, a requirement for a Wishbone master is that it has to hold ADR_O stable during a transfer, until it receives an acknowledge from the slave.

We'll want to do test-driven development (TDD) of our property, which means we aren't allowed to write any business code until we have a test for it. The test is supposed to fail, to show that no code exists yet that fulfills its requirement. I think in the extreme case we wouldn't even be allowed to write any production code, because a compile error is also a test failure. We'll skip this step and write an empty assertion:

interface vgm_wb_master_sva_checker(
  input bit CLK_I,
  input bit RST_I,
  input bit CYC_O,
  input bit STB_O,
  input bit [31:0] ADR_O,
  input bit ACK_I
);
  default clocking cb @(posedge CLK_I);
  endclocking

  default disable iff RST_I;


  ADR_HELD_UNTIL_ACK : assert property (
    1'b0 |-> 1  // Antecedent never matches, so the assertion never starts
  );
endinterface

I like to separate SVA checkers from the interface used by an agent for driving and monitoring. Because we only use input signals, we avoid any subtle bugs related to driving both from the agent and from some buggy assertion or assertion support code (e.g. using = instead of ==). Ideally, instead of using an interface we would use a checker (which by definition can only have inputs), but tool support for this language construct is rather sparse.

I avoided writing an assertion that trivially passes on every clock cycle, because that would cause tests of the form "assertion succeeds if ..." to pass, even though the code doesn't really do anything. (Careful with expressions like !CLK_I here: assertion expressions are sampled in the Preponed region, so at the posedge the sampled clock value is 0 and such an antecedent matches on every cycle, which is exactly what we're trying to avoid.) I guess what we write here at this point isn't really that important, but from a purist point of view, an assertion that never starts seems like the better choice.
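
Just for illustration, here's roughly what the checker-based variant could look like, assuming your tools accept it (I've put the clocking and reset directly on the property to keep the sketch self-contained):

checker vgm_wb_master_sva_checker_chk(
  bit CLK_I,
  bit RST_I,
  bit CYC_O,
  bit STB_O,
  bit [31:0] ADR_O,
  bit ACK_I
);
  // Same placeholder assertion as in the interface version; checker
  // ports can't be driven from inside, so accidental drives are caught
  // at compile time.
  ADR_HELD_UNTIL_ACK : assert property (
    @(posedge CLK_I) disable iff (RST_I)
    1'b0 |-> 1
  );
endchecker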

Now that we have our SVA checker, it's time to set up our testing infrastructure. SVAUnit is built on UVM. Tests are class-based, so they need a virtual interface to be able to stimulate the SVA checker's inputs. We'll need another interface, with internal signals that can be driven by the tests:

interface wb_bfm();
  bit CLK;
  bit RST;
  bit CYC;
  bit STB;
  bit [31:0] ADR;
  bit ACK;

  clocking cb @(posedge CLK);
  endclocking

  always #1 CLK = ~CLK;
endinterface

All of our unit tests will need to use this second interface. Since the tests are all classes, we can declare a base class for them that handles the infrastructural aspect of getting the virtual interface:

class test_base extends svaunit_test;
  virtual wb_bfm bfm;


  function void build_phase(uvm_phase phase);
    super.build_phase(phase);

    if (!uvm_config_db #(virtual wb_bfm)::get(null, "*", "bfm", bfm))
      `uvm_fatal("BFMERR", "BFM wasn't passed")
  endfunction

  // ...
endclass

Let's write a first test that should cause the assertion to fail:

class adr_held_until_ack_fail extends test_base;
  task test();
    bfm.CYC <= 1;
    bfm.STB <= 1;
    bfm.ADR <= 'h1122_3344;
    @(bfm.cb);

    bfm.ADR <= 'hffff_0000;
    @(bfm.cb);

    `pass_if_sva_not_succeeded("ADR_HELD_UNTIL_ACK", "")
  endtask

  // ...
endclass

Initially I had the crazy idea of using the `fail_if_sva_succeeded(...) macro, but that's not really the same thing as testing for a pass. That an assertion didn't fail is a necessary condition for it to pass, but it's not sufficient: it also needs to have triggered and finished. Our empty assertion, for example, never fails, but it never really passes either, since it never triggers.

The business code of a unit test should be placed inside the test() task, which will be run by the framework. In this case, we start driving a transfer to address 0x1122_3344, but we change the address in the next cycle, before the slave has a chance to respond.

We'll need to tell the SVAUnit framework which tests to run. This is done in a test suite:

class test_suite extends svaunit_test_suite;
  function void build_phase(uvm_phase phase);
    super.build_phase(phase);

    `add_test(adr_held_until_ack_fail)
  endfunction

  // ...
endclass

The final piece of the infrastructure is a top module that connects the two interfaces and triggers execution of the unit tests:

module top;
  `SVAUNIT_UTILS

  `include "test_base.svh"
  `include "tests/adr_held_until_ack_fail.svh"
  `include "test_suite.svh"


  wb_bfm bfm();
  vgm_wb_master_sva_checker intf(
    bfm.CLK,
    bfm.RST,
    bfm.CYC,
    bfm.STB,
    bfm.ADR,
    bfm.ACK
  );


  initial begin
    uvm_config_db #(virtual wb_bfm)::set(null, "*", "bfm", bfm);
    run_test();
  end
endmodule

Now it's time to run our unit test. Since we haven't implemented anything in our assertion yet, the test should fail. After starting the run, I found that the reports are way too verbose. In comparison to the lightweight PASS/FAIL information you get from SVUnit, SVAUnit prints quite a few lines for just one unit test.

At first, I got a misleading error message that the assertion is not enabled. I didn't really know how to interpret this. I thought it had something to do with the fact that the assertion can't trigger (since I wrote it like this). I made it trigger, but the message was still there. After digging around through the library code, I saw that an assertion has to be explicitly tagged as an assertion-under-test. It wasn't obvious from the documentation that this is required, so I think this is something that needs to be improved.

I figured the best place to do that would be in the pre_test() method:

class test_base extends svaunit_test;
  task pre_test();
    vpiw.enable_assertion("ADR_HELD_UNTIL_ACK");
  endtask

  // ...
endclass

I did this, but nothing changed. After some more digging around, it turns out that the pre_test() method never gets executed. This contradicts their blog article and their CDNLive paper; it seems that the code changed at some point and the documentation lagged behind. To get this working, I called pre_test() from the test's test() task:

class adr_held_until_ack_fail extends test_base;
  task test();
    pre_test();
    // ...
  endtask

  // ...
endclass

Having gotten used to SVUnit's setup() and teardown() methods (which are taken from JUnit), I'm disappointed that something similar doesn't exist here.
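
If you really want them, one possible workaround (my own sketch, not something SVAUnit provides) is to let the base class own test() and have the concrete tests override a separate body task:

class test_base extends svaunit_test;
  // Hypothetical setup/teardown hooks, in the spirit of SVUnit/JUnit.
  virtual task setup();
    vpiw.enable_assertion("ADR_HELD_UNTIL_ACK");
  endtask

  virtual task teardown();
    // Nothing to clean up by default.
  endtask

  // Concrete tests override test_body() instead of test().
  virtual task test_body();
  endtask

  task test();
    setup();
    test_body();
    teardown();
  endtask

  // ...
endclass

This only works as long as nobody forgets and overrides test() directly, which is exactly the kind of thing a framework should enforce for you.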

The reports also use some muddled terminology: they say that "x/y assertions PASSED". This was confusing, since I had only written one assertion. It turns out SVAUnit calls its checks "assertions" too, which is a poor choice of wording for a framework that is supposed to test assertions.

Toward the end of the report there is a section that mentions that not all checks were exercised. For example, we haven't tried running a test where we expect the assertion to pass. This is a bit too verbose for my taste, since we, as users, should be able to decide what kinds of tests we want to run. We might decide that we don't want to do any pass testing, since any false fails will be caught when running an assertion together with the DUT. These messages are informative and don't affect the result of a test suite run.

In any case, we should now see that the test is failing, as we expected. Moving on with TDD, we want to write the least amount of code that will get this test to pass. Again, if we want to be purists about it, we can write the following implementation for the assertion:

ADR_HELD_UNTIL_ACK : assert property (
  0
);

This assertion always fails, but it doesn't do what we want it to do. We also want it to pass whenever a master keeps the address stable. For this new requirement, we can write a second unit test:

class adr_held_until_ack_pass extends test_base;
  task test();
    pre_test();

    bfm.CYC <= 1;
    bfm.STB <= 1;
    bfm.ADR <= 'h1122_3344;
    @(bfm.cb);

    bfm.ACK <= 1;
    @(bfm.cb);

    #1;
    `pass_if_sva_succeeded("ADR_HELD_UNTIL_ACK", "")
  endtask

  // ...
endclass

Now we'll need to get our hands dirty and write some real code for the assertion:

ADR_HELD_UNTIL_ACK : assert property (
  CYC_O && STB_O |=>
    ADR_O == $past(ADR_O) throughout ACK_I [->1]
);

We can argue about whether this is the minimum amount of code that causes the tests to pass (hint: it's not; see the sketch below). Nevertheless, this implementation passes both tests. Our job is done! Or is it?...
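
Regarding the hint: the following weaker property would also turn both tests green, since our pass test only holds the address stable for one cycle before the acknowledge. A true TDD purist would start from something like this sketch and let further tests force the stronger behavior; we'll stick with the version above.

// Checks just one cycle of address stability, yet satisfies both
// unit tests written so far.
ADR_HELD_UNTIL_ACK : assert property (
  CYC_O && STB_O |=> ADR_O == $past(ADR_O)
);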

You might have noticed that the assertion requires at least two cycles to determine its result, because of the |=> operator. This implies that Wishbone transfers always take at least two cycles. According to Permission 3.30 and Permission 3.35, this isn't necessarily the case: a slave may respond to a transfer immediately. We discovered this by review now, but we could just as well have discovered it as fails in a regression that would have turned out to be false. What do we have to do in this case? We need to relax the assertion.

The first thing we need is to write a test that causes a false fail. For example, we could have two back-to-back transfers, where the first one is finished without any wait states:

class adr_held_until_ack_pass_b2b_no_wait extends test_base;
  task test();
    pre_test();

    bfm.CYC <= 1;
    bfm.STB <= 1;
    bfm.ADR <= 'h1122_3344;
    bfm.ACK <= 1;
    @(bfm.cb);

    bfm.ADR <= 'hffff_0000;
    @(bfm.cb);

    `fail_if_sva_not_succeeded("ADR_HELD_UNTIL_ACK", "")
  endtask

  // ...
endclass

Once we have a test, we can also fix the assertion:

ADR_HELD_UNTIL_ACK : assert property (
  CYC_O && STB_O |->
    ACK_I or
    ##1 (ADR_O == $past(ADR_O) throughout ACK_I [->1])
);

As mentioned in the beginning, this is the point of highest risk, because we could relax the assertion in such a way that it never fails. Because our previous unit tests still pass, we know we still fulfill the old requirements.

Notice that the requirement never said that the assertion has to pass, only that it mustn't falsely fail. Because of that, we can even refactor it to not trigger at all in the case of a transfer without any wait cycles:

ADR_HELD_UNTIL_ACK : assert property (
  CYC_O && STB_O && !ACK_I |=>
    ADR_O == $past(ADR_O) throughout ACK_I [->1]
);

It all depends on what our requirements are. If we want the assertion to only check in case of stalled transfers, then we should really use `pass_if_sva_not_started(...). In that case, we wouldn't even need the second back-to-back transfer in our test.
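
Such a test could look something like the following sketch (assuming `pass_if_sva_not_started(...) passes when the assertion's antecedent never matched):

class adr_held_until_ack_not_started_no_wait extends test_base;
  task test();
    pre_test();

    // A single transfer that completes without wait states; the
    // refactored assertion shouldn't even start for it.
    bfm.CYC <= 1;
    bfm.STB <= 1;
    bfm.ADR <= 'h1122_3344;
    bfm.ACK <= 1;
    @(bfm.cb);

    `pass_if_sva_not_started("ADR_HELD_UNTIL_ACK", "")
  endtask

  // ...
endclass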

I've uploaded the example code to GitHub for anyone who wants to tinker with it. I've changed the packaging to have one repository per example. I think this is better, because it allows me to deliver in smaller chunks, which should hopefully be easier to read/understand/modify. I'll also try to release the old examples in this format and update the corresponding pages.

After this short TDD tutorial, let's get back to SVAUnit itself. The 'one test per class' idea seems nice at first. We can have each test in its own file, and that can make things more bite-sized. At the same time, because tests need to be svaunit_tests, we need to provide a constructor for each and every one of them, which can result in a lot of boilerplate. The `add_test(...) macro also tries to create tests using the factory, so we'll need to implement registration, too. The idea of creating unit tests via the UVM factory is a bit stretched, since I don't see any situation where being able to do overrides would be useful (and if you do want to use this, it sounds like you're overthinking things).
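
Concretely, on top of its test() task, each test ends up carrying something like this (the constructor signature shown is the usual UVM component one; whatever svaunit_test actually requires applies):

class adr_held_until_ack_fail extends test_base;
  // Factory registration, needed because `add_test(...) creates the
  // test via the UVM factory.
  `uvm_component_utils(adr_held_until_ack_fail)

  // Constructor boilerplate, repeated in each and every test.
  function new(string name = "adr_held_until_ack_fail", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  // ...
endclass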

I find the entire decision to base the package on UVM a bit much, since it adds unneeded complexity. SVAUnit could have defined its own idioms for building and running tests. It might also put off the non-UVM crowd, like people still doing directed testing or formal verification engineers. Some might see UVM in there and say "for sure it's gonna be complicated, I'll just continue doing my own thing". The only good reason to use UVM (which I got from Ionuț Ciocîrlan, one of the developers, in a hallway discussion last year) is that you can plug UVCs into the testing infrastructure for stimulus generation. In our example, if we had had a Wishbone UVC, we wouldn't have had to write all of the signal toggles ourselves, but could have just started transactions on a Wishbone driver/sequencer pair. This could be pretty useful for complicated protocols. Even so, it should be possible to integrate UVCs into non-UVM environments with a certain amount of effort. A UVM-based SVAUnit only makes it easier to do so; it doesn't turn it from impossible to possible.
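
To illustrate, with such a UVC the fail test's body could shrink to something like this sketch, where wb_agent and wb_transfer_seq are hypothetical names for what the UVC would provide:

class adr_held_until_ack_fail extends test_base;
  // Hypothetical agent handle, presumably retrieved via the config DB
  // just like the virtual interface.
  wb_agent agent;

  task test();
    // One sequence start replaces all of the manual signal toggling.
    wb_transfer_seq seq = wb_transfer_seq::type_id::create("seq");
    seq.start(agent.sequencer);

    `pass_if_sva_not_succeeded("ADR_HELD_UNTIL_ACK", "")
  endtask

  // ...
endclass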

Another comment I have is that sometimes it's not enough to check whether an assertion passed or failed; there are cases where we want to check how many times it passed or failed. This would be useful, for example, to make sure that an assertion doesn't start multiple parallel threads.

All in all, the library is a bit rough around the edges. It wasn't easy for me as a beginner to start using it. The examples provided with the installation are way too complicated; I would have found it much better if they had included the code from the blog post as a standalone example, which would have made it easier to follow. Once you get used to it, though, it's easy to get productive. Barring some qualms about the not-so-up-to-date documentation, the missing setup and teardown, the required boilerplate code and the heavyweight reporting (which can be customized, since it's UVM), all of which are things that can easily be fixed, SVAUnit is pretty solid.

For now, though, I prefer using SVUnit. I already use it to develop my classes and I don't want to fragment my unit testing across two different frameworks. I have some rudimentary VPI code that lets me get the status of assertions and I can achieve pretty much the same basic functionality as with SVAUnit. Nevertheless, if you're not doing any unit testing at all, you could start with your assertions. You probably already do a lot of ad hoc testing via forces or RTL code mutation, so why not formalize that and make it repeatable using SVAUnit?
