SWE 622, Spring 2017, Homework 2, due 2/22/2017 4:00pm.

Introduction

Throughout the semester, you will build a sizable distributed system: a distributed in-memory filesystem. The goal of this project is to give you significant hands-on experience with building distributed systems, along with a tangible project that you can describe to future employers to demonstrate your (now considerable) distributed software engineering experience. We’ll do this in pieces, with a new assignment every two weeks that builds on the prior one. You’re going to create an actual filesystem: when you run CFS, it will mount a new filesystem on your computer (as if you had inserted a new hard disk), except that the contents of that filesystem are defined dynamically by CFS (rather than being stored on a hard disk).

While we’re not focusing on making this a high-throughput system (for simplicity, we are implementing the filesystem driver in Java, which will inherently add a sizable performance penalty compared to implementing it in native code), you should be able to see how our system applies to the real world: we often build services that need to share data. If the total amount of data that we need to store is large, then we’ll probably set up a service specifically devoted to storing that data (rather than storing it all on the same machines that are performing the computation). For performance, we’ll probably want to do some caching on our clients to ensure that we don’t have to keep fetching data from the storage layer. Once we think about this caching layer a little more, managing consistency becomes somewhat tricky, and that’s where a lot of the fun in this project will come in. Here’s a high-level overview of the system that we are building:

Each computer that is interested in using the filesystem will run its own copy of the filesystem driver (we’ll call it the CloudFS, or CFS for short). We’ll use Dropbox’s web API to provide the permanent storage, and will implement a cache of our Dropbox contents in our cfs driver. A lock service will make sure that only a single client writes to a file at a time (HW2). Over the course of the semester, we’ll set up the cache to be distributed among multiple CFS clients, allowing them to fetch data directly from each other (HW3), distribute the lock server between the clients to increase fault tolerance (HW4), investigate fault recovery (HW5), and security and auditing (HW6).

Acknowledgements

Our CloudFS is largely inspired by the yfs lab series from MIT’s 6.824 course, developed by Robert Morris, Frans Kaashoek and the 6.824 staff.

Homework 2 Overview

We’re going to change our system model of CFS for this assignment. In this assignment, we’re going to expand CFS so that multiple CFS clients can simultaneously be running. It won’t quite be the above architecture, but perhaps more like this:

To coordinate multiple clients simultaneously working with the same underlying filesystem, you’ll introduce two new services: the Lock Server and the Cache Server. For the purposes of this assignment, we’ll assume that the lock server and the cache server can never fail (and hence, are not concerned with designing any fault tolerance into them).

The Lock Server

The goal of the lock server is to provide consistency control. In Homework 1, you considered multiple processes accessing the same files at the same time on the same instance of CFS, which you handled with locks. Now, there will be multiple CFS processes running simultaneously. What is to prevent one client from deleting a folder while another is trying to write a file into that same folder? The lock server will expose two methods (over RMI) to the CFS clients: acquireLock and releaseLock (detailed in Part 2 below).

There will be a single instance of the LockServer running, and all clients will communicate with it using these two RMI methods.

The Cache Server

The cache server will maintain a view of the entire contents of CFS. The role of the cache server is to replace the in-process caching that you created in Homework 1 with a managed cache that will have many more features. It also simplifies our consistency semantics at this point, because we’ll only worry about having a single cache server. No individual CFS client has to (or should) maintain a local copy of any files or directory structure (again, a departure from HW1). Instead, assume that the cache server has very low latency and can always be communicated with – what’s the point in reinventing the wheel?

We’ll use Redis as our cache server. Your implementation of the cache server, then, will be entirely in the form of the cache client that coordinates with the cache server.

This assignment totals 80 points, and will correspond to 8% of your final grade for this course.

Getting Started

Start out by importing the HW2 GitHub repository. Then, clone your private repository on your machine, and import it into your IDE.

Academic honesty reminder: You may NOT share any of your code with anyone else. You may NOT post your code in a publicly viewable place (e.g. in a public GitHub repository).  We are providing you with your own private git repository (using the link above) to host your code securely. You may face severe penalties for sharing your code, even “unintentionally.” Please review the course’s academic honesty policy.

You’ll notice there are now several projects in the repository:

  • fusedriver – Your CFS driver, just like in HW1
  • lock-shared – Shared RMI stubs used by the lock server and the CFS client
  • lock-server – The Lock server code

This layout should be somewhat familiar to you from Lab 1. The project configuration files have been set up in a similar way, so that fusedriver can see the classes in lock-shared, and lock-server can see the classes in lock-shared too. I’ve also added stub code to demonstrate the RMI connection and the Redis connection in RedisCacheProvider. You can find documentation for Redis and for its Java connector, Jedis, on their respective websites.

You’ll also find a Vagrantfile in the repository, similar to the configuration used for HW1, but including Redis. The first time you start it, it will install the correct version of Redis. If it fails for any reason (for instance, network connectivity issues), run vagrant provision to have it retry.

VM warning: All of the VMs I’m distributing forward several ports from your local machine into the VM for ease of use/debugging (for instance, 5005 is used for remotely debugging stuck Maven test processes). This means that you can NOT run multiple VMs from this class at the same time; if you still have your HW1 (or Lab 2) VM running when you try to start this one, you’ll get an error. To resolve this, run ‘vagrant suspend’ in the other VM’s directory, and then you’ll be able to start this one.

At a high level, your tasks are to implement the Lock Server (in the edu.gmu.swe622.cfs.LockServer class) and the lock client (in the edu.gmu.swe622.cloud.RedisCacheProvider class). You will also implement the cache client in the edu.gmu.swe622.cloud.RedisCacheProvider class. Additional details are provided below.

Tips

You should copy your config.properties over from HW1.

You can run the lock server in the VM by running:
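One plausible invocation, assuming the modules have already been built with a standard Maven target/ layout (the classpath below is purely illustrative; use the command given in class/the repository README if it differs):

    # illustrative only – adjust the classpath to match your build
    cd lock-server
    java -cp target/classes:../lock-shared/target/classes edu.gmu.swe622.cfs.LockServer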

And, you can run the CFS client:
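One plausible invocation, assuming CloudFS is the main class of the fusedriver module (the class name, classpath, and mount point below are illustrative assumptions, not the handout’s exact command):

    # illustrative only – adjust to match your build and desired mount point
    cd fusedriver
    java -cp target/classes edu.gmu.swe622.cloud.CloudFS -mnt /mnt/cfs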

(Note the need to add the -mnt option before specifying the mount point, unlike in HW1)
The first time you run it, you’ll need to re-authorize it for Dropbox (the config files that authorized it before were local to the HW1 VM).

If you want to remove what’s stored in Redis, the easiest way is using the FLUSHALL command in redis-cli:
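For example, from a shell inside the VM (note that FLUSHALL removes every key stored in Redis, so only use it when you really do want to start from an empty cache):

    redis-cli FLUSHALL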

Debugging remote JVMs

You can launch a Java process and have it wait for your Eclipse debugger to connect to it by adding this to the command line:
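The standard JDWP agent flag looks like this (suspend=y makes the JVM wait for the debugger to attach before running; 5005 matches the port discussed below):

    -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005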

If you’re doing this from within the Vagrant VM, then also update your Vagrantfile to uncomment this line by removing the # at the start of the line:
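The line in question is presumably a forwarded-port entry along these lines (this exact mapping is an assumption; uncomment whatever line actually appears in your Vagrantfile), forwarding guest port 5005 to host port 5006 to match the Eclipse settings described below:

    config.vm.network "forwarded_port", guest: 5005, host: 5006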

After you make this change, disconnect from the VM if you are connected, then do “vagrant reload” before “vagrant ssh” to apply your changes.

Then, in Eclipse, select Debug… > Debug Configurations, then “Remote Java Application”, then “create new debug configuration.” Select the appropriate project, set “Connection Type” to “Standard (Socket Attach)”, “Host” to “localhost”, and “Port” to 5005 (if you are NOT using Vagrant) or 5006 (if you ARE using Vagrant).

 

Part 1: Implement the cache client (25 points)

As described above, your cache will now be maintained by Redis. Recall that Redis allows you to store not just opaque strings/byte arrays, but also data structures. Ultimately, how you decide to represent your cached filesystem in Redis is up to you. However, we suggest a structure here; you are free to ignore our suggestions entirely or deviate from them, but if you do so, please describe in your README.md file the structure that you ended up implementing and your design rationale.

Cache structure

The simplest way to represent your filesystem is to use the path of a file as a key (since every file will have a unique path) and have that key point to a map. For directories, you can store the contents of the directory as a list. But what key would you use to access those contents? You should make sure that the key generated to store a directory’s contents can never collide with another directory’s listing, or with a valid path that could otherwise be stored. For instance, if you chose to store the directory contents for a directory / at /.CFSDirectoryContents, then you would prevent the user from ever creating a file called .CFSDirectoryContents. One solution would be to generate a random key under which to place the directory contents and store that key in the metadata of the directory. However, this approach might be slow: generating random keys is cheap but not free, and if you ever end up generating the same random key twice, it gets nasty (you’ll need to double-check that you made a new key every time, make a new one if necessary, and make sure that all of that happens in a critical section…).

Instead, here’s a clever work-around for storing directory contents: all paths on the filesystem must begin with /. Metadata for each path will be stored using the path itself as a key, so all metadata keys will begin with the character /. You can therefore create a unique key to store the contents of a directory /foo at contents:/foo. This exploits the facts that paths are always unique (which you’re handling already) and that paths always start with /.
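As a small sketch of this key scheme (these helper names are ours, not part of the provided code):

    // Hypothetical helpers illustrating the suggested key scheme. A directory's
    // listing lives under "contents:" + path, which can never collide with a
    // metadata key because every valid path starts with "/".
    static String metadataKey(String path) {
        return path;                 // e.g. "/foo"
    }

    static String contentsKey(String path) {
        return "contents:" + path;   // e.g. "contents:/foo"
    }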

The contents of the map can closely mirror the fields of CloudFile (a sketch follows the list):

  • isDirectory (boolean)
  • size (long)
  • mTime (long)
  • contents (byte array)
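One way to store these fields, sketched with Jedis hashes (the field names and serialization here are illustrative, not a required layout):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;

    // Illustrative only: store a file's metadata and contents as a Redis hash
    // keyed by its path, mirroring the CloudFile fields listed above.
    void cachePut(JedisPool pool, String path, byte[] data, boolean isDirectory, long mTime) {
        try (Jedis jedis = pool.getResource()) {
            jedis.hset(path, "isDirectory", Boolean.toString(isDirectory));
            jedis.hset(path, "size", Long.toString(data.length));
            jedis.hset(path, "mTime", Long.toString(mTime));
            // Use the binary-safe overload for the raw file contents.
            jedis.hset(path.getBytes(), "contents".getBytes(), data);
        }
    }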

Thread safety

You’ll need to deal with two concurrency risks here: (1) internal consistency risks from multiple threads in the same CFS instance accessing the same Jedis client simultaneously (regardless of whether they are accessing the same path), and (2) external consistency risks from multiple clients accessing the same path (or its parent) at the same time. To avoid (1), make sure to use the JedisPool (as we did in Lab 2), always acquiring and closing a Jedis client in each CloudProvider method (e.g. get, put, etc.). We’ll handle consistency risks from multiple clients simultaneously accessing the same path in parts 2 and 3.
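For example, a read helper following this acquire-use-close pattern might look like (sketch only; the cache-miss path to Dropbox and error handling are omitted):

    // Borrow a Jedis connection from the pool for this one operation;
    // try-with-resources returns it to the pool when the block exits.
    byte[] cachedContents(JedisPool pool, String path) {
        try (Jedis jedis = pool.getResource()) {
            return jedis.hget(path.getBytes(), "contents".getBytes());
        }
    }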

Implementation

Implement your Redis filesystem cache in RedisCacheProvider. As in Homework 1, you are free to place your implementation across multiple methods/classes, but you may not modify CloudFS or CloudProvider in any way.

A successful implementation of part 1 will be functionally equivalent to a correct implementation of part 2 of Homework 1 (the in-Java memory cache): every GET operation will be cached locally so that subsequent calls do not require a round trip to Dropbox, and all modification operations (e.g. PUT, mkdir, etc.) will be cached locally as well, so that subsequent GETs to the affected path(s) can be served without needing to reach out to Dropbox. A successful implementation of part 1 need not concern itself with path locking.

Part 2: Implement the lock server (30 points)

Now that we will support multiple CFS clients simultaneously accessing the same filesystem, we need to create a coordination mechanism to provide the same concurrency semantics that we guaranteed in Homework 1 using local locks. The most straightforward solution will be to create a lock server that manages locks for the entire filesystem. For this assignment, we’re going to make the following assumptions:

  • CFS clients don’t crash
  • The lock server doesn’t crash
  • Network transmissions are delivered within a “reasonable” time, in order

The lock server will expose the following interface (via RMI) to CFS clients:
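A minimal sketch of what this interface might look like, using the method names and the (path, write?) signature implied by the example below; the interface name LockService is our placeholder, and the provided stub code in lock-shared may differ:

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Sketch of the remote interface shared between lock-server and fusedriver.
    public interface LockService extends Remote {
        void acquireLock(String path, boolean writeLock) throws RemoteException;
        void releaseLock(String path, boolean writeLock) throws RemoteException;
    }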

The implementation of acquireLock and releaseLock is up to you. As in Homework 1, you should maintain a single ReadWriteLock for every path, but in this case, we will require you to NOT use a reentrant lock (more on why non-reentrant below). Acquiring a lock on one path requires acquiring a lock on all of that path’s parents. Write locks are required when a client is writing to a path, and in those cases, are only write-locked on the deepest part of the path. For example, if a client calls acquireLock("/folder/other/file",true), then the LockServer should attempt to acquire the following locks:

  • / (READ)
  • /folder (READ)
  • /folder/other (READ)
  • /folder/other/file (WRITE)

Note that it’s not necessary for the server to grant a write lock on /, /folder or /folder/other, because the only thing being modified is /folder/other/file. Note also the importance of acquiring locks in this order: If the lock server attempted to acquire them in the reverse order, then a deadlock could be possible.

acquireLock should not return execution to the client until it has successfully acquired all of the locks requested.
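A sketch of that acquisition order on the server, under our own assumptions (the PathLock class here is a minimal stand-in for the non-reentrant read/write lock you will design yourself; all names are illustrative, not provided code):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;

    public class LockOrderingSketch {

        // A deliberately non-reentrant read/write lock (ReentrantReadWriteLock is
        // reentrant, which the assignment disallows). Minimal wait/notify version.
        static final class PathLock {
            private int readers = 0;
            private boolean writer = false;

            synchronized void lockRead() throws InterruptedException {
                while (writer) wait();
                readers++;
            }

            synchronized void lockWrite() throws InterruptedException {
                while (writer || readers > 0) wait();
                writer = true;
            }

            synchronized void unlockRead() { readers--; notifyAll(); }
            synchronized void unlockWrite() { writer = false; notifyAll(); }
        }

        private final ConcurrentHashMap<String, PathLock> locks = new ConcurrentHashMap<>();

        private PathLock lockFor(String path) {
            return locks.computeIfAbsent(path, p -> new PathLock());
        }

        // "/folder/other/file" -> ["/", "/folder", "/folder/other"]
        static List<String> ancestorsOf(String path) {
            List<String> ancestors = new ArrayList<>();
            if (path.equals("/")) return ancestors;   // the root has no ancestors
            ancestors.add("/");
            int idx = path.indexOf('/', 1);
            while (idx > 0) {
                ancestors.add(path.substring(0, idx));
                idx = path.indexOf('/', idx + 1);
            }
            return ancestors;
        }

        // Read-lock every ancestor from the root down, then take the requested
        // lock on the deepest component. Locking top-down in the same order on
        // every request is what avoids deadlock; the call blocks until all of
        // the locks are held.
        public void acquireLock(String path, boolean writeLock) throws InterruptedException {
            for (String ancestor : ancestorsOf(path)) {
                lockFor(ancestor).lockRead();
            }
            if (writeLock) {
                lockFor(path).lockWrite();
            } else {
                lockFor(path).lockRead();
            }
        }
    }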

You should not use reentrant locks in the lock server. The rationale: it gets really complicated to implement reentrancy correctly on the server. Since multiple calls to acquireLock from the same client might occur on different threads of execution on your lock server, and multiple clients might be handled by the same thread, simply using a ReentrantLock does not work. Instead, you’d need to track which client holds which lock at which time, and this is nasty (definitely doable, but it adds complexity). Hence, your client will track which locks it holds at any given time and will be smart enough to remember what locks it already has. You can assume that a client will never call unlock unless it is the one who owns the lock.

Part 3: Implement the lock client (25 points)

Now that you have a functioning lock server, you can implement the locking semantics on each client (using the same semantics described above, and in HW1). Your client will need to maintain a record of which processes accessing the filesystem (on that local client) have already successfully acquired which locks, since the server will not be able to track this.

Moreover, you’ll need to be more careful than in HW1 about tracking requests to access a file from the same or multiple processes. In HW1, we allowed you to simply create a ReentrantLock for each CloudPath and then acquire and release those as needed. Note the following clarifications about CloudPath objects:

  • If multiple clients cause openFile("/foo/bar/") to be invoked, each will have a unique CloudFile created (hence, simply using a lock on that object is incorrect).
  • If a single client calls openFile, then get or set, it is not necessarily the case that those calls will be processed in the same thread of execution.

The implication is that you will need to track lock ownership by CloudPath instance. If your CFS client receives a request for a lock on a CloudPath, you’ll need to check (1) whether that caller already holds a lock on it (noting that you can uniquely identify the caller by its CloudPath instance), and if not, (2) call the lock server to get the lock and then cache the fact that that caller owns it.
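A sketch of that client-side bookkeeping (a fragment that would live inside RedisCacheProvider), assuming an RMI stub named lockServer of the placeholder LockService type sketched earlier; the field and helper names here are illustrative:

    import java.rmi.RemoteException;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.IdentityHashMap;
    import java.util.Map;
    import java.util.Set;

    private LockService lockServer;   // RMI stub obtained at startup

    // For each CloudPath instance, remember which (read/write, path) locks it
    // already holds, so we only call the lock server for locks it doesn't have.
    private final Map<CloudPath, Set<String>> heldLocks =
            Collections.synchronizedMap(new IdentityHashMap<>());

    void lockPath(CloudPath caller, String path, boolean writeLock) throws RemoteException {
        String key = (writeLock ? "W:" : "R:") + path;
        Set<String> held = heldLocks.computeIfAbsent(caller, c -> new HashSet<>());
        synchronized (held) {
            if (!held.contains(key)) {
                lockServer.acquireLock(path, writeLock);   // blocks until granted
                held.add(key);
            }
        }
    }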

Submission

Perform all of your work in your homework-2 git repository. Commit and push your assignment. Once you are ready to submit, create a release tagged “hw2.” Unless you want to submit a different version of your code, leave it at “master” to release the most recent code that you’ve pushed to GitHub. Make sure that your name is specified somewhere in the release notes. The time at which you create your release is what will be used to judge whether you submitted by the deadline; it is NOT the time at which you push your code.

Make sure that your released code includes all of your files and builds properly. You can check this by downloading the release archive and inspecting/building it on your own machine/VM. There is no need to submit binaries/jar files (the target/ directory is purposely ignored by git). It is acceptable (and normal) to NOT include the config.properties file in your release.

If you want to resubmit before the deadline: simply create a new release; we will automatically use the last release you published. If you submitted before the deadline and then decide to make a new release within 24 hours of the deadline, we will automatically grade your late submission and subtract 10%. Any releases created more than 24 hours past the deadline will be ignored.

Reminder – Late policy: Late assignments will be accepted for 24 hours after the due date, for a penalty of 10%. After 24 hours have passed since the deadline, no late submissions will be accepted. Your release is what will be graded: if you forget to make a release, then we will assume that you chose to not submit the assignment.

Contact