Testing UVM Drivers, Part 2

In the previous post we looked at how we can emulate sequencer/driver communication using a lightweight stub of uvm_sequencer. Let's also look at some more tips and tricks I've picked up while writing unit tests for drivers. To mix things up a bit, let's look at the AXI protocol. We're not going to implement a full-featured driver; instead, we'll focus on the write channels:

interface vgm_axi_interface(input bit ACLK, input bit ARESETn);
  logic [3:0] AWID;
  logic [31:0] AWADDR;
  logic [3:0] AWLEN;
  logic AWVALID;
  logic AWREADY;

  logic [3:0] WID;
  logic [31:0] WDATA;
  logic WLAST;
  logic WVALID;
  logic WREADY;

  logic [3:0] BID;
  logic [1:0] BRESP;
  logic BVALID;
  logic BREADY;
endinterface

Our sequence item will model the properties of a write transaction:

typedef enum bit [3:0] { LENGTH_[1:16] } length_e;  // LENGTH_n is encoded as n-1


typedef class transfer;

class sequence_item extends uvm_sequence_item;
  rand bit [3:0] id;
  rand bit [31:0] address;
  rand length_e length;
  rand transfer transfers[];
  rand int unsigned delay;

  // ...
endclass


class transfer extends uvm_sequence_item;
  rand bit [31:0] data;
  rand int unsigned delay;

  // ...
endclass

The driver will consist of the familiar get and drive loop:

class master_driver extends uvm_driver #(sequence_item);
  virtual vgm_axi_interface intf;

  virtual protected task get_and_drive();
    forever begin
      seq_item_port.get_next_item(req);
      drive();
      seq_item_port.item_done();
    end
  endtask

  // ...
endclass

We'll implement the driver based on the following unit tests. Have a quick look at them before going further:

`SVTEST(drive_awvalid__with_delay)
  sequence_item item = new("item");
  `FAIL_UNLESS(item.randomize() with {
    delay == 3;
  })
  sequencer.add_item(item);

  repeat (3)
    @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 0)
  @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 1)
`SVTEST_END


`SVTEST(drive_awvalid__held_until_hready)
  sequence_item item = new("item");
  `FAIL_UNLESS(item.randomize() with {
    delay == 0;
  })
  sequencer.add_item(item);
  intf.AWREADY <= 0;

  repeat (4)
    @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 1)

  intf.AWREADY <= 1;
  @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 1)
  @(posedge clk) `FAIL_UNLESS(intf.AWVALID === 0)
`SVTEST_END


`SVTEST(drive_write_addr_channel)
  sequence_item item = new("item");
  `FAIL_UNLESS(item.randomize() with {
    id == 5;
    address == 'h1122_3344;
    length == LENGTH_14;
    delay == 0;
  })
  sequencer.add_item(item);

  @(posedge clk);
  `FAIL_UNLESS(intf.AWID == 5)
  `FAIL_UNLESS(intf.AWADDR == 'h1122_3344)
  `FAIL_UNLESS(intf.AWLEN == 'b1101)
`SVTEST_END


`SVTEST(drive_wvalid__with_delay)
  sequence_item item = new("item");
  `FAIL_UNLESS(item.randomize() with {
    delay == 0;
    length == LENGTH_4;
    transfers[0].delay == 1;
    transfers[1].delay == 3;
    transfers[2].delay == 2;
    transfers[3].delay == 0;
  })
  sequencer.add_item(item);

  // Skip over address phase
  @(posedge clk);

  @(posedge clk) `FAIL_UNLESS(intf.WVALID === 0)
  @(posedge clk) `FAIL_UNLESS(intf.WVALID === 1)

  repeat (3)
    @(posedge clk) `FAIL_UNLESS(intf.WVALID === 0)
  @(posedge clk) `FAIL_UNLESS(intf.WVALID === 1)

  repeat (2)
    @(posedge clk) `FAIL_UNLESS(intf.WVALID === 0)
  @(posedge clk) `FAIL_UNLESS(intf.WVALID === 1)

  @(posedge clk) `FAIL_UNLESS(intf.WVALID === 1)
`SVTEST_END


`SVTEST(drive_write_data_channel)
  sequence_item item = new("item");
  `FAIL_UNLESS(item.randomize() with {
    delay == 0;
    length == LENGTH_4;
    transfers[0].data == 'hffff_ffff;
    transfers[1].data == 'h0000_0000;
    transfers[2].data == 'haaaa_aaaa;
    transfers[3].data == 'h5555_5555;
    transfers[0].delay == 0;
    transfers[1].delay == 0;
    transfers[2].delay == 0;
    transfers[3].delay == 0;
  })
  sequencer.add_item(item);

  // Skip over address phase
  @(posedge clk);

  @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'hffff_ffff)
  @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h0000_0000)
  @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'haaaa_aaaa)
  @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h5555_5555)
`SVTEST_END


`SVTEST(drive_write_data_channel__data_held_until_wready)
  sequence_item item = new("item");
  `FAIL_UNLESS(item.randomize() with {
    delay == 0;
    length == LENGTH_4;
    transfers[0].data == 'hffff_ffff;
    transfers[1].data == 'h0000_0000;
    transfers[2].data == 'haaaa_aaaa;
    transfers[3].data == 'h5555_5555;
    transfers[0].delay == 0;
    transfers[1].delay == 0;
    transfers[2].delay == 0;
    transfers[3].delay == 0;
  })
  sequencer.add_item(item);

  // Skip over address phase
  @(posedge clk);

  intf.WREADY <= 0;
  repeat (3)
    @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'hffff_ffff)
  intf.WREADY <= 1;
  @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'hffff_ffff)

  intf.WREADY <= 0;
  @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h0000_0000)
  intf.WREADY <= 1;
  @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h0000_0000)

  intf.WREADY <= 0;
  repeat (2)
    @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'haaaa_aaaa)
  intf.WREADY <= 1;
  @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'haaaa_aaaa)

  intf.WREADY <= 0;
  repeat (4)
    @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h5555_5555)
  intf.WREADY <= 1;
  @(posedge clk) `FAIL_UNLESS(intf.WDATA === 'h5555_5555)
`SVTEST_END


`SVTEST(drive_write_data_channel__wlast_driven_for_last_transfer)
  sequence_item item = new("item");
  `FAIL_UNLESS(item.randomize() with {
    delay == 0;
    length == LENGTH_8;
    foreach (transfers[i])
      transfers[i].delay == 0;
  })
  sequencer.add_item(item);

  // Skip over address phase
  @(posedge clk);

  repeat (7)
    @(posedge clk) `FAIL_IF(intf.WLAST)
  @(posedge clk) `FAIL_UNLESS(intf.WLAST)
`SVTEST_END

The tests above cover the desired functionality that we want to implement in our driver. We won't go through each and every one of them and see what production code we need to write, since the actual implementation of the driver isn't really important for this post. We want to focus more on the tests themselves.
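Even so, to make the checks above concrete, here is a rough sketch of what the address-phase portion of drive() might look like. This is a hypothetical implementation, not the reference one; it merely follows the handshake the tests describe:

virtual protected task drive();
  repeat (req.delay)
    @(posedge intf.ACLK);

  intf.AWID <= req.id;
  intf.AWADDR <= req.address;
  intf.AWLEN <= req.length;  // LENGTH_n is encoded as n-1, matching AWLEN
  intf.AWVALID <= 1;
  @(posedge intf.ACLK iff intf.AWREADY);
  intf.AWVALID <= 0;

  // ... drive the W channel transfers in a similar fashion
endtask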

When first confronted with these tests, a new developer won't have an easy time understanding what's going on. The tests are pretty verbose and it's not immediately clear what the focus of each one is. Let's see how we could improve this.

First, we'll notice that the one thing we do in each test is to create an item and queue it for the driver. We use randomization to make sure that the item is "consistent", meaning that the constraints defined in it hold. We could set item variables procedurally, but we would need to ensure that the length and the number of transfers match, which would mean even more code. Instead of repeating these steps in each unit test, we could centralize them into one place. Since we use randomization, we can't extract a function. We're forced to use a macro:

`define add_item_with(CONSTRAINTS) \
  sequence_item item = new("item"); \
  `FAIL_UNLESS(item.randomize() with CONSTRAINTS) \
  sequencer.add_item(item);

This is going to save us some lines of code and it's going to make the intent of the code more obvious. What will differ between unit tests are the constraints they use when adding an item:

`SVTEST(drive_awvalid__with_delay)
  `add_item_with({
    delay == 3;
  })

  // ...
`SVTEST_END


`SVTEST(drive_awvalid__held_until_hready)
  `add_item_with({
    delay == 0;
  })

  // ...
`SVTEST_END


`SVTEST(drive_write_addr_channel)
  `add_item_with({
    id == 5;
    address == 'h1122_3344;
    length == LENGTH_14;
    delay == 0;
  })

  // ...
`SVTEST_END

We can even go one step further. If we look at the tests again, we'll notice that we use the same constraint over and over again to get an item without any delay. Also, whenever we want to check some write data channel aspects, we want to constrain the delay of each transfer to be zero. We do have tests where we want non-zero delays, but they are the exceptional cases. What we could do is to add some default values to the delay variables via soft constraints. This way, whenever we use the `add_item_with(...) macro we know that we'll get an item without any delay:

`define add_item_with(CONSTRAINTS) \
  sequence_item item = new("item"); \
  `FAIL_UNLESS(item.randomize() with { \
    soft delay == 0; \
    foreach (transfers[i]) \
      soft transfers[i].delay == 0; \
    \
    if (1) \
      CONSTRAINTS \
  }) \
  sequencer.add_item(item);

This way, we can write even more compact code to add items. Here are the two data tests I mentioned:

`SVTEST(drive_write_data_channel)
  `add_item_with({
    length == LENGTH_4;
    transfers[0].data == 'hffff_ffff;
    transfers[1].data == 'h0000_0000;
    transfers[2].data == 'haaaa_aaaa;
    transfers[3].data == 'h5555_5555;
  })

  // ...
`SVTEST_END


`SVTEST(drive_write_data_channel__data_held_until_wready)
  `add_item_with({
    length == LENGTH_4;
    transfers[0].data == 'hffff_ffff;
    transfers[1].data == 'h0000_0000;
    transfers[2].data == 'haaaa_aaaa;
    transfers[3].data == 'h5555_5555;
  })

  // ...
`SVTEST_END

If we compare them with the initial versions from the beginning of the post, we'll see that they are much more compact. It's also clearer that the handling of the data variables is what we're testing, not anything related to delays.

Since we want to check the behavior of signals at certain points in time, we need to do a lot of waits. The statement @(posedge clk) comes up a lot in our unit tests. We could shorten this by using a default clocking block:

module master_driver_unit_test;
  // ...

  default clocking @(posedge clk);
  endclocking
endmodule

Now, instead of using the verbose @(posedge clk) statement, we can use the cycle delay operator:

`SVTEST(drive_wvalid__with_delay)
  // ...

  // Skip over address phase
  ##1;

  ##1 `FAIL_UNLESS(intf.WVALID === 0)
  ##1 `FAIL_UNLESS(intf.WVALID === 1)

  repeat (3)
    ##1 `FAIL_UNLESS(intf.WVALID === 0)
  ##1 `FAIL_UNLESS(intf.WVALID === 1)

  repeat (2)
    ##1 `FAIL_UNLESS(intf.WVALID === 0)
  ##1 `FAIL_UNLESS(intf.WVALID === 1)

  ##1 `FAIL_UNLESS(intf.WVALID === 1)
`SVTEST_END

Having fewer words to parse makes the test's intent clearer. Using a default clocking block is an option most of the time, but it won't work for more exotic protocols that use both edges of the clock or multiple clocks.

One thing you may have noticed is that I've marked some wait statements with comments. If you've read Clean Code (and if you haven't you should), you'll call me out on this. Uncle Bob says that comments are a crutch for poorly written code that doesn't express its intent properly. Instead of relying on comments, we could create a named task:

task wait_addr_phase_ended();
  ##1;
endtask

Now, when a test calls this task, it'll be immediately apparent what the intention is:

`SVTEST(drive_write_data_channel)
  `add_item_with({
    // ...
  })

  wait_addr_phase_ended();

  // ...
`SVTEST_END

There is a mismatch between what the task does and its name. The task actually just waits for one cycle; to reflect this, it should have been named wait_cycle(). A task call like that would take us back to square one in terms of test readability, since we might as well just use the ##1 statement, which tells us the same thing. If we want to solve this mismatch between the task name and its implementation, we should change the latter. In the context of our tests, we knew that the address phase was going to end after one clock cycle. In general, though, what we want is to wait for AWVALID and AWREADY to be high at the same time:

task wait_addr_phase_ended();
  @(posedge clk iff intf.AWVALID && intf.AWREADY);
endtask

Some unit-testing purists might bash this method because it adds complexity to the testing logic. This is something we should try to avoid, since we don't want to have to start testing our tests. The task is lean enough that I'd say this point doesn't apply here.

While we may have streamlined our test preparation code, checking that our expectations are fulfilled is ridiculously long. Procedural code isn't really well suited to check behavior over time. You know what is though? Assertions... Instead of writing a big mess of procedural code that does repeats and cycle delays, we could write a nice property and check that it holds. When people hear the word property, they normally think of concurrent assertions, but this isn't really what we want here. What we want to do is to check that a certain property holds after a certain point in time, not during the whole simulation.

Luckily, SystemVerilog provides the expect construct, which does exactly what we want. Given a property, it will begin evaluating it starting with the first subsequent clock. For example, to check that the driver can drive data transfers with delays, we could write the following:

`SVTEST(drive_wvalid__with_delay)
  // ...

  expect (
    intf.WVALID === 0
    ##1 intf.WVALID === 1
    ##1 intf.WVALID === 0 [*3]
    ##1 intf.WVALID === 1
    ##1 intf.WVALID === 0 [*2]
    ##1 intf.WVALID === 1
    ##1 intf.WVALID === 1)
  else
    `FAIL_IF(1)
`SVTEST_END

This is much cleaner than the procedural code we had before. It also allows us to structure our unit tests according to the Arrange, Act, Assert pattern (even though most of the time for drivers Arrange and Act get a bit mixed, but at least Assert is separated clearly).

Since expecting that properties hold is something we'll want to do over and over again, let's wrap it in a utility macro and make it part of the vgm_svunit_utils package:

`define FAIL_UNLESS_PROP(prop) \
  expect (prop) \
  else \
    `FAIL_IF(1)

Using this macro will give the unit tests a more consistent SVUnit look and feel:

`SVTEST(drive_wvalid__with_delay)
  // ...

  `FAIL_UNLESS_PROP(
    intf.WVALID === 0
    ##1 intf.WVALID === 1
    ##1 intf.WVALID === 0 [*3]
    // ...
  )
`SVTEST_END

As we saw in the previous post, it's very important to use the === (4-state equality) operator instead of ==, otherwise we're writing tests that always pass. I intentionally wrote drive_write_addr_channel with this bug to show this (kudos to anyone who noticed). We also need to watch out for hidden comparisons. In the last test we didn't even use the equality operator:

`SVTEST(drive_write_data_channel__wlast_driven_for_last_transfer)
  // ...

  repeat (7)
    @(posedge clk) `FAIL_IF(intf.WLAST)
  @(posedge clk) `FAIL_UNLESS(intf.WLAST)
`SVTEST_END

Because of the way the `FAIL_* macros are written, they will both always pass, so the test isn't really doing anything. If we were to re-write them using properties, we would notice if WLAST isn't being driven:

`SVTEST(drive_write_data_channel__wlast_driven_for_last_transfer)
  // ...

  `FAIL_UNLESS_PROP(
    !intf.WLAST [*7]
    ##1 intf.WLAST)
`SVTEST_END

If WLAST were to be X, the negation would also return X, which would be interpreted as 0, causing the property to fail. For single-bit signals, using properties is much safer than comparing for equality. There's also the added bonus that the code is more compact. For vectors, though, we still need to make sure that we're using the === operator.

Another cool thing that properties allow us to do is to focus more on the aspects that are important for a test. For example, in the data_held_until_ready test we want to check that the driver doesn't modify the value of WDATA until it sees a corresponding WREADY. We did this by choosing known values for the data transfers and checking that these values stay on the bus for the appropriate number of clock cycles. Actually, we don't (or shouldn't) care what data is being driven, as long as it stays constant. We could simplify the code to the following:

`SVTEST(drive_write_data_channel__data_held_until_wready)
  `add_item_with({
    length == LENGTH_4;
  })

  // ...

  `FAIL_UNLESS_PROP(
    stable_for(intf.WDATA, 3)
    ##1 stable_for(intf.WDATA, 1)
    ##1 stable_for(intf.WDATA, 2)
    ##1 stable_for(intf.WDATA, 4))
`SVTEST_END

The stable_for(...) sequence only exists because my simulator complained about using $past(...) inside an expect statement. Its contents could have been inlined into the property:

sequence stable_for(signal, int unsigned num_cycles);
  ##1 ($stable(signal) [*num_cycles]);
endsequence

We don't really need to use a property to check that the signal stays stable. As per the LRM we could use the $stable(...) system function inside procedural code as well, but some tools don't allow this, limiting its use to assertions.
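Where a tool does allow it, the procedural variant might look like this (a sketch, assuming the default clocking block set up earlier supplies the sampling clock for $stable):

repeat (3) begin
  @(posedge clk);
  `FAIL_UNLESS($stable(intf.WDATA))
end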

While it's usually frowned upon to have randomness in unit tests, I would argue that this isn't necessarily a problem here. Randomness should be avoided because it causes us to write complicated code for our expectations. As long as we don't need to do this (i.e. the checking code stays the same regardless of what we plug into it), everything should be fine. There is one caveat, though. On the off chance that two consecutive transfers contain the same data, it won't be possible to figure out whether the driver moved to the second transfer too early. In the extreme case, all transfers could have the same data value, making it impossible to check that the driver really holds transfers constant for their entire durations. This could be solved by writing a constraint that all data values should be unique.
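Such a constraint could be sketched as a pairwise inequality over the transfers, added to the test's randomization call:

`add_item_with({
  length == LENGTH_4;
  foreach (transfers[i])
    foreach (transfers[j])
      if (i != j)
        transfers[i].data != transfers[j].data;
})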

Last, but not least, I hinted in the previous post that a driver not only drives items from the sequencer. It's also supposed to request items from the sequencer at well defined points in time. For example, a driver should call get_next_item(...) once it's able to process a new item, but not before, to allow the running sequence to randomize items at the latest possible time (so-called late randomization). This is helpful when sequences use the current state of the system to decide what to do next. For simple protocols this is easy: a new item can start exactly after the previous one has finished. For pipelined protocols, though, it's not as easy. The AXI protocol is massively pipelined and can have a lot of ongoing transactions at any time. I don't want to have to think about what a sensible definition for an available slot for a new item would be, because the scheme would probably be too complicated. I do, however, want to show how we could verify that a driver calls sequencer methods at defined times.

We'll take a contrived example of how to handle responses, since I couldn't think of anything better. Whether we're checking that put_response(...) or get_next_item(...) was called when expected doesn't really matter, so this example should be enough to prove a point. Let's say that when a response comes on the write response channel, the driver is supposed to let the sequencer know.

We'll have another sequence item for the response:

typedef enum { OKAY, EXOKAY, SLVERR, DECERR } response_kind_e;

class response extends uvm_sequence_item;
  bit [3:0] id;
  response_kind_e kind;

  // ...
endclass

First of all, we'll want to check that the response contains the proper values:

`SVTEST(put_response_when_ready)
  response rsp;

  intf.BID <= 5;
  intf.BVALID <= 1;
  ##1;

  intf.BREADY <= 1;
  ##1;

  uvm_wait_for_nba_region();
  `FAIL_UNLESS(sequencer.try_get_rsp(rsp))
  `FAIL_UNLESS(rsp.id == 5)
`SVTEST_END

The driver waits for a response to be seen on the bus and calls put_response(...) when this is the case:

class master_driver extends uvm_driver #(sequence_item, response);
  // ...

  virtual protected task collect_and_put();
    forever begin
      collect();
      seq_item_port.put_response(rsp);
    end
  endtask

  virtual protected task collect();
    @(posedge intf.ACLK iff intf.BVALID);
    rsp = response::type_id::create("rsp");
    rsp.id = intf.BID;
  endtask
endclass

The test is going to pass with this implementation. The problem is, though, that the response is considered to be done only once it has also been accepted, i.e. BREADY goes high. Unfortunately, the previous code sends responses too early. We'll need to update our unit test to check that put_response(...) was called after BREADY was also high and that it was called only once. This is where the test diagnostic information that sequencer_stub provides is going to be useful:

`SVTEST(put_response_when_ready)
  // ...

  intf.BREADY <= 1;
  ##1;

  `FAIL_UNLESS_TRIGGERED(sequencer.put_response_called)
  `FAIL_UNLESS(sequencer.num_put_response_calls == 1)

  // ...
`SVTEST_END

The sequencer_stub class contains a named event, put_response_called, that, as the name suggests, is triggered when put_response(...) is called by the driver. The `FAIL_UNLESS_TRIGGERED(...) macro is part of the vgm_svunit_utils package. It wraps the code required to check that an event was triggered in the current time step:

`define FAIL_UNLESS_TRIGGERED(ev) \
  fork \
    wait (ev.triggered); \
    \
    begin \
      #1; \
      `FAIL_IF_LOG(1, `"'ev' not triggered`") \
    end \
  join_any \
  disable fork;

The wait statement checks if the event was already triggered (because the driver code could have executed first) and blocks if it wasn't. If time moves forward without the event being triggered, an error is reported. Since the test will now fail, we'll need to update the driver code (not shown).
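One possible fix, sketched here under the assumption that the rest of the driver stays the same, is to consider a response collected only once it has also been accepted, i.e. when BVALID and BREADY are high in the same clock cycle:

virtual protected task collect();
  @(posedge intf.ACLK iff intf.BVALID && intf.BREADY);
  rsp = response::type_id::create("rsp");
  rsp.id = intf.BID;
endtask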

We've covered a lot of ground in this post on how to write more comprehensive, readable and maintainable unit tests for UVM drivers. You can find the example code here. I hope this helps you be more productive when developing your own drivers.
