CS/SWE 795, Fall 2017, Homework 1, due 9/5/2017 4:00pm.


For most of the semester, there will be no homework assignments in this class, and instead, just a project. Ultimately, you will choose the topic of your project, but all projects will involve Java bytecode engineering. The three homework assignments are designed to help make sure that you have the core skills that you need to complete your project.


For this homework, you will create a very simple statement coverage tool for Java. Recall that one way to evaluate the quality of test cases is to track the coverage of your code. The rationale is: if your code is never executed, it cannot be tested (that said, remember that a simple metric like statement coverage doesn't guarantee you tested it under all conditions). Your tool will output, for each test, a list of lines of code that it executed. Your tool will also output a list of lines of code that were NOT executed.

Real coverage tools tend to generate very complicated and pretty reports, like this one:

Code Coverage Report

Your tool should NOT generate a complicated report like this.

Instead, all that your tool needs to output is two simple, tab-separated text files: covered.txt and uncovered.txt.


In covered.txt, each line in the output will represent a JUnit test covering a line of code. The first field will be the name of the test class (the Java class containing the test), the second field will be the name of the test method, the third will be the name of the Java class that is being hit, and the last field will be the line number hit.


In uncovered.txt, each line in the output will represent a line of code that was not covered by any test. You only need to include lines from classes that were used at all by the tests (e.g., if there is some other class that was never referenced at all, you do not need to report it as uncovered).
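With hypothetical class and test names (the exact column layout of uncovered.txt is an assumption based on the description above), the two files might look like this, with fields separated by tabs:

```
covered.txt:
CalculatorTest	testAdd	Calculator	12
CalculatorTest	testAdd	Calculator	13
CalculatorTest	testDivide	Calculator	20

uncovered.txt:
Calculator	47
```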

Getting Started

Academic honesty reminder: You may NOT share any of your code with anyone else. You may NOT post your code in a publicly viewable place (e.g. in a public GitHub repository).  You may face severe penalties for sharing your code, even “unintentionally.” Please review the course’s academic honesty policy.

This assignment totals 100 points, and will correspond to 10% of your final grade for this course.

Start out by importing the HW1 GitHub repository. Then, clone your private repository on your machine, and import it into your IDE. You will find that the starter project is configured very similarly to our first lab assignment. You will do all of your work for this assignment in this project.

Part 1:  Identifying what test is running (15 points)

One key part of your tool is that it will attribute coverage to specific tests, rather than simply reporting all lines that were covered. Knowing which tests covered which lines of code can be useful for understanding more about what each test does.

You will determine what test is running by creating a JUnit run listener, which will be called by JUnit as the Maven Surefire plugin runs your tests. Find out how to attach a listener in Surefire on the Surefire documentation page. Note that in this project, we are using the Failsafe plugin to run our tests: Failsafe is a variant of Surefire, and all of the arguments in the Surefire documentation can be passed to Failsafe as well. Modify the pom.xml file in your project to add the listener to Failsafe, so that it will run whenever a test is run by mvn verify.
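Following the Surefire documentation's "Using Custom Listeners and Reporters" approach, the configuration might look roughly like this (the listener class name is a placeholder, not part of the starter code):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <properties>
      <property>
        <name>listener</name>
        <value>edu.gmu.cs795.hw1.MyRunListener</value>
      </property>
    </properties>
  </configuration>
</plugin>
```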

Your run listener should detect when a test starts and when a test stops.
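A minimal sketch of such a listener, assuming JUnit 4's RunListener API (the class and field names here are placeholders, not part of the starter code):

```java
import org.junit.runner.Description;
import org.junit.runner.notification.RunListener;

// Sketch: records which test is currently running so that the coverage
// logger can attribute line hits to it.
public class CoverageRunListener extends RunListener {
    // Read by the runtime logger; volatile because the listener and the
    // instrumented code under test may run on different threads.
    public static volatile String currentTestClass = null;
    public static volatile String currentTestMethod = null;

    @Override
    public void testStarted(Description description) {
        currentTestClass = description.getClassName();
        currentTestMethod = description.getMethodName();
    }

    @Override
    public void testFinished(Description description) {
        currentTestClass = null;
        currentTestMethod = null;
    }
}
```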

Part 2: Recording which lines of code are hit by each test (35 points)

Use bytecode instrumentation and ASM to record which lines of code are hit by each test. You can consider a line hit if the first instruction on that line is hit. Your instrumentation code should be contained entirely within the edu.gmu.cs795.hw1.inst.LineCoverageMV class. Your runtime logger (which will interact with the JUnit listener and record which lines are hit by each test) should be placed in edu.gmu.cs795.hw1.StatementCoverageLogger (note: I've provided a hook so that the dump method will be automatically called when the JVM running it shuts down).
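One possible shape for the logger side, sketched with plain JDK collections (the class and method names below are assumptions for illustration; your real code belongs in edu.gmu.cs795.hw1.StatementCoverageLogger, and the run listener is responsible for setting the current-test field):

```java
import java.io.PrintWriter;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a runtime coverage logger: instrumented code reports line hits,
// and dump() writes one tab-separated record per (test, class, line) pair.
public class CoverageLog {
    // Maps "TestClass\ttestMethod" -> set of "ClassName\tline" hits.
    private static final Map<String, Set<String>> hits = new ConcurrentHashMap<>();
    // Set by the JUnit run listener when each test starts.
    static volatile String currentTest = "none\tnone";

    // Called by instrumentation at the first instruction of each source line.
    public static void lineHit(String className, int line) {
        hits.computeIfAbsent(currentTest, k -> ConcurrentHashMap.newKeySet())
            .add(className + "\t" + line);
    }

    // Called from the shutdown hook; sorted so the output is deterministic.
    public static void dump(PrintWriter out) {
        for (Map.Entry<String, Set<String>> e : hits.entrySet())
            for (String hit : new TreeSet<>(e.getValue()))
                out.println(e.getKey() + "\t" + hit);
    }
}
```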

You should use only the ASM visitor API (not the tree API). Make sure to consider tricky cases, such as:

  • Code that is part of an initializer for a field
  • Code that is part of an initializer for a static field
  • Code that is part of a constructor
  • Code that is part of a static {} block
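The key observation is that ASM delivers a visitLineNumber event just before the first instruction on each source line, regardless of whether that line lives in an ordinary method, a constructor, or a static initializer. A minimal sketch of the instrumenting method visitor (the logger class and method signature it calls are assumptions; your real visitor goes in edu.gmu.cs795.hw1.inst.LineCoverageMV):

```java
import org.objectweb.asm.Label;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

// Sketch: at each source line, inject a call to the runtime logger.
public class LineCoverageMV extends MethodVisitor {
    private final String className;

    public LineCoverageMV(MethodVisitor mv, String className) {
        super(Opcodes.ASM5, mv);
        this.className = className;
    }

    @Override
    public void visitLineNumber(int line, Label start) {
        super.visitLineNumber(line, start);
        // Inject: StatementCoverageLogger.lineHit(className, line);
        super.visitLdcInsn(className);
        super.visitLdcInsn(line);
        super.visitMethodInsn(Opcodes.INVOKESTATIC,
                "edu/gmu/cs795/hw1/StatementCoverageLogger",
                "lineHit", "(Ljava/lang/String;I)V", false);
    }
}
```

Because the injection happens wherever visitLineNumber fires, the same visitor naturally handles field initializers (which the compiler places in each constructor) and static initializers (which end up in the generated static block).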

Part 3: Recording which lines of code were hit by no test (35 points)

Your tool will also need to output all lines of code that appeared in class files used by the tests but were never executed. You should collect this information again using ASM, with the same method visitor that you used in Part 2.
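One way to think about this: since the method visitor already sees every line number at instrumentation time, it can record the full set of instrumented lines, and the uncovered set is simply the difference between that and the lines any test hit. A sketch of that bookkeeping (the class and method names are placeholders):

```java
import java.util.Set;
import java.util.TreeSet;

// Sketch: uncovered lines = all lines seen at instrumentation time
// minus the lines hit by any test. Entries use the "ClassName\tline"
// encoding from the logger sketch above.
public class UncoveredLines {
    public static Set<String> uncovered(Set<String> allInstrumented,
                                        Set<String> coveredByAnyTest) {
        Set<String> result = new TreeSet<>(allInstrumented);
        result.removeAll(coveredByAnyTest);
        return result;
    }
}
```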

Part 4:  Test Cases (15 points)

Write test cases to test your assignment. The simplest way to test a tool like this is to create a test suite that exercises the tool (with all of the tests 'passing'), then have a hook that compares the output the tool generated from those tests against a known-good output. I have provided a post-integration-test hook that will automatically scan the covered.txt and uncovered.txt files that you generate (and output) and compare them to those in the src/main/resources/integration-test/ directory. I have provided one sample test, and put the intended output in the baseline covered.txt and uncovered.txt files. Write, at a minimum, tests that show:

  • Coverage on lines in different kinds of methods (static, non-static)
  • When a line is covered by all, some, or no tests
  • When the line you are considering covering throws an exception
  • When the line you are considering covering is in an initializer, or is the initializer of a field

You can create as many new test classes/methods and supporting classes/methods as you find appropriate. Note that you will have ONE SINGLE covered.txt and uncovered.txt file that contain the coverage information for ALL of your tests (since the file format allows you to specify which test the coverage information pertains to).


Perform all of your work in your homework-1 git repository. Commit and push your assignment. Make sure that your name is specified somewhere in the README. The time that you push the code will be the time used to determine if it is on time, within the 24-hour grace period, or too late.

Make sure that your released code includes all of your files and builds properly. You can do this by clicking to download the archive, and inspecting/trying to build it on your own machine/vm. There is no need to submit binaries/jar files (the target/ directory is purposely ignored from git).

If you want to resubmit before the deadline: Simply commit and push again. I will grade the last commit that was received before the deadline. You will not be able to push after the deadline.

Reminder – Late policy: Late assignments will be accepted for 24 hours after the due date, for a penalty of 10%. After 24 hours have passed since the deadline, no late submissions will be accepted. Your release is what will be graded: if you forget to make a release, then we will assume that you chose to not submit the assignment.