Microservices – Part 1, what are they?

We’re getting a lot of requests lately to produce a Microservices course. It’s been high up on our (growing) todo list so it will definitely happen, but for some reason I’ve held off. I think this might be because we’ve tended to favour courses based on specific tools and concrete techniques rather than high level architectural stuff.
But we’re definitely doing it – possibly as a standalone course, possibly spread across multiple videos (the upcoming courses on Messaging in Spring Boot and Wildfly would both be good candidates to contain some microservices, and our series on DevOps would be a good place to cover how to deploy microservices). Or maybe we’ll do both, a short course addressing the overall ideas and then we’ll apply those ideas in the appropriate courses.
This will happen over the coming months; in the meantime, here's a bit of an unstructured exploration of Microservices.

What are Microservices?

Simply put, the Microservice movement is a shift away from the old-school technique of building huge, single applications whose scope cuts across the entire business (these are commonly known as Monoliths).
As an honest example, our website at VirtualPairProgrammers has been developed as a traditional monolith – a single WAR file containing, effectively, our entire business.
Ok, we're not a huge business. We're not Spotify (yet). "It's just a simple shopping cart site!" I hear you cry. Well, there's a lot more going on behind the web interface:

  • Video production (rendering pipelines)
  • Video Subtitling
  • Newsletter Production
  • Customer Lists
  • Sales Data
  • Viewing Figures
  • System Administration
  • Support and Ticketing
  • International Currencies
  • International Business Rules (eg VAT, tax rates)
  • The usual CMS stuff (content management)
  • Standard eCommerce/Shopping Cart
  • Affiliate Management
  • Subscriptions billing and re-billing

Probably more. Our marketing manager loves nothing more than shoving in hideous cartesian join SQLs and exports to Excel, because – that’s what marketing managers do. It annoys the purist developers who think that their Hadoop based Viewing figures code is so beautiful it should be framed and put in an art gallery – but that’s a consequence of running a Monolith – we have different business areas with very different needs treading on one another’s toes.
We’re not ashamed of this monolith. It made absolute sense in our early days to deploy this as a single WAR file – it was the simplest thing that could possibly work, and work it did, for many years. But today, there’s so much complexity in there, it’s becoming more brittle over time. A simple tiny change (like a change in the VAT rate for a country) means a full rebuild (taking around 5 minutes) and then a complete reboot of the Java Server (Tomcat in our case).
A move to a Microservice based architecture would see us deploy multiple, small or tiny applications, each aligned to a specific area of the business.
In Part 2 of this series (next week), I’ll describe our first steps in migrating this monolith to Microservices. In this blog, I’ll talk about the general principles we should be adhering to.
In one sense, there’s nothing exciting in Microservices – it’s just good software engineering principles.

Loose Coupling / High Cohesion

At the core of Microservices is the same principle that is at the heart of any good software design. High cohesion in this context means that a single microservice must do ONE thing, and do that ONE thing well. What "ONE thing" means is vague and is down to judgement, but key to a microservice is that it should be aligned to a specific area of the business. I've said that once already, but it's of utmost importance. It's absolutely no use making your Microservices align along tiers – if you have a "Web" microservice and a "Middle Tier" microservice and a "Database" microservice, then you don't have microservices at all – you've got three monoliths with an enormous amount of coupling between them. Which brings me to…
…loose coupling, meaning that the dependencies between the services should be as minimal as possible. In development terms, a change to one microservice should have minimal impact on the other microservices in the overall system. In run-time operations terms, ideally, we should be able to take down an entire microservice with no degradation of the performance of the overall system.

Code Repository Isolation

So at one level a microservice is a simple expression of good software engineering, but it leads to difficult architectural choices. If your system is now a hundred microservices, should you have a single source code repository that contains all of the microservices (with the services being in subfolders)? Or should you maintain a hundred separate git/mercurial repositories?
The answer is separate repositories – if you go with one huge repository, the temptation will be to start doing mega-builds, and this will lead to "lock step" deployments where all 100 microservices are deployed at the same time – you therefore get much of the unpleasantness of a monolith. There are in fact many successful Microservice projects that do keep a single repository, so this is something of an "ideal" goal and probably not a killer – but it makes sense that services which are deployed independently should also be developed independently.
In a similar vein, is it ok to deploy all of your microservices to a single server/VM instance/EC2 Instance/Azure Thingy?

Service Isolation

Ideally a microservice should be deployed onto its own standalone "instance". Many projects do deploy multiple microservices to a single machine, but again, this can lead to the temptation of coupling them together, leading once more to "lock step" deployment.
This can be expensive, which is where container services such as Docker step in. A container can be thought of as much lighter than a Virtual Machine – a single VM can host multiple containers, each container responsible for a single microservice.

We love Docker at VirtualPairProgrammers, and yes, we will be doing a course on it!

Automate, Automate, Automate

You might be able to put up with the pain of manually deploying a monolith. Or manually spinning up a few Cloud Instances to host it. Or manually installing software onto those instances. Some people like pain, especially if there’s a bit of drama involved. Scaling that up to 100 deployments, 1000 instances, forget it. Microservices absolutely depend upon the automation of deployment, of provisioning, of configuration management. Continuous Delivery (http://martinfowler.com/bliki/ContinuousDelivery.html) is a prerequisite.

 

No Integration Databases

This is my favourite principle of Microservices – there should be no Integration Databases – avoid them at all costs. (For a recap of integration vs application databases, you can watch a chapter from our NoSQL course here).
There will be much wailing and gnashing of teeth over this – the integration database is the most precious possession of many businesses. But a database into which anything can delve, reading and writing at will, is both incohesive (ie it captures many different parts of the business, by definition) AND tightly (not loosely) coupled (again by definition, as many disparate applications DEPEND upon it).
So it's unarguable really that integration databases have no part in microservices – it's part of the definition. However, in the real world, expect to see many projects proudly proclaim that they use microservices, of which one "micro"service is the "database service". Which you can't change without the permission of the DBA. Oh look, there's a "Business Logic" Microservice!
These are just my unstructured thoughts about the main ingredients of a Microservice – in part 2 (next week), I’ll describe a concrete example of how we at VirtualPairProgrammers are slowly migrating our IT across to a Microservice architecture.

 

How (Not) to Design a REST API

On my last project I had to integrate our code with an external REST provider. The provider was a banking service (I’ll call them “TheBank” to protect their identity) and we had to record financial transactions with them.

Check out the API documentation that we had to work with here (note: I’ve completely changed the API terminology so that the actual provider can’t be identified, but the structure and the errors are still the same).

A good Java interview question would be – what have they done wrong in this REST API design? Have a read of the docs before reading on, see if you can come up with a list of what could be improved.

 

(Plug: our JavaEE with Wildfly and Spring Remoting courses both explore good REST API design.)

Ok, you’re back and hopefully you’re face-palming. There’s a quick answer – it’s all wrong. I can’t find a single good decision in that entire API. Good going, TheBank.

Designing APIs is hard, admittedly, but whoever put this together hasn’t even grasped the fundamentals of REST (or HTTP). But this isn’t an isolated case – I’ve lost count of how many times I’ve had to integrate with similarly broken APIs – in fact it would be much quicker to count the ones which *are* well implemented (the figure is not far north of “zero”). This is probably the worst I’ve seen.

I’m not a REST zealot by any means, in fact on our course I openly admit that HATEOAS is a bit of a lofty goal and it’s no disgrace if you don’t go that far. I don’t care about purity or satisfying some aesthetic goal. What I do care about is wasted time and development effort, and I care if I’m forced to write brittle and error prone code.

So let’s run through TheBank’s blunders and see why it matters:

No URIs or Representations

Leaving aside the dodgy looking "endpoint.shtml" (what does this even mean? SHTML usually means a server-side include, some kind of Apache extension. Why do I as the client care about this?), they are routing every single API call through a single URI. Thus they immediately lose the expressiveness of URIs. The URIs *are* the API.

So rather than an API, we have a single method with a huge telescoping list of query parameters.

Even though they call their API "RESTful", there's no trace of any kind of representation. This means that all the data for every call has to be converted into a long series of query params, leading to very ugly and unreadable constructions.

[There’s nothing wrong with query params – we use them on our REST courses. But only for constraints or extra information that doesn’t belong in the representations. Example – if you only want the first 20 records in a query, then this would make a good query param.]

Why this matters: if done properly, I could have quickly coded up a "Transaction" class in my client and let my framework (I was using Spring) convert it to JSON. Instead I had to spend time string concatenating, always an error prone and tedious process.
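
Just to illustrate the difference, here's a rough sketch of what that client code could have looked like (the Transaction class, its fields and the URI are all invented for this example, and I'm assuming Jackson is on the classpath so Spring can handle the JSON conversion):

import java.math.BigDecimal;

import org.springframework.web.client.RestTemplate;

// Hypothetical representation class - the real field names would come from the API docs
public class Transaction
{
 private String reference;
 private BigDecimal amount;
 private String currency;

 public Transaction() { }   // no-arg constructor for the JSON converter

 public Transaction(String reference, BigDecimal amount, String currency)
 {
  this.reference = reference;
  this.amount = amount;
  this.currency = currency;
 }

 public String getReference() { return reference; }
 public void setReference(String reference) { this.reference = reference; }
 public BigDecimal getAmount() { return amount; }
 public void setAmount(BigDecimal amount) { this.amount = amount; }
 public String getCurrency() { return currency; }
 public void setCurrency(String currency) { this.currency = currency; }

 public static void main(String[] args)
 {
  // one call to POST the representation as JSON - no string concatenation
  RestTemplate rest = new RestTemplate();
  Transaction recorded = rest.postForObject(
    "https://api.example-bank.com/transactions",   // invented URI
    new Transaction("ref-123", new BigDecimal("9.99"), "GBP"),
    Transaction.class);
  System.out.println("Recorded transaction: " + recorded.getReference());
 }
}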

Invalid use of GET. No use of HTTP verbs.

GET, by definition, is for “safe” and “idempotent” operations. Meaning, no changes to state and no side effects. The “record” method is of course recording new transactions, so this has violated the contract of GET.

They're clearly unaware that other verbs exist. POST should have been used for this non-idempotent operation, and the update and delete operations would also have needed their own verbs (PUT and DELETE) to avoid the ugly use of "method=record".

Why this matters: I had to be extremely careful to ensure that my GET requests are issued once and once only. Every call made to this API looked more or less identical because the all-important "method=" is buried in an unreadable list of query params.
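
For contrast, a verb-based design puts the operation in the HTTP method rather than in a query parameter. This is only a sketch of my own (class names and paths are invented), not the bank's code, but it shows the idea in Spring MVC:

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical sketch of a verb-based API - names and paths invented
@RestController
@RequestMapping("/transactions")
public class TransactionController
{
 // recording a transaction changes state, so it's a POST (never a GET)
 @RequestMapping(method = RequestMethod.POST)
 @ResponseStatus(HttpStatus.CREATED)
 public String record(@RequestBody String transactionJson)
 {
  // persist the transaction here, then return the stored representation
  return transactionJson;
 }

 // reads are safe and idempotent, so GET is correct here
 @RequestMapping(value = "/{id}", method = RequestMethod.GET)
 public String find(@PathVariable String id)
 {
  return "{ \"id\": \"" + id + "\" }";
 }

 // removing a transaction gets its own verb instead of "method=delete"
 @RequestMapping(value = "/{id}", method = RequestMethod.DELETE)
 @ResponseStatus(HttpStatus.NO_CONTENT)
 public void cancel(@PathVariable String id)
 {
  // delete from storage here
 }
}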

Implementing their own authentication scheme

The very weird process of hashing your API Key and Secret is clearly an authentication mechanism that they've invented themselves. Why? HTTP has a specified and well understood form of authentication – Basic Authentication. Under the standard, all I would have to do is send an "Authorization" header like this:

Authorization: Basic QWxhZGRpbjpPcGVuU2VzYW1l

The odd looking string there is the username (API key) and password (secret) separated by a colon and then base64 encoded.
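
To show quite how little work the standard scheme asks of the client, here's a sketch of building that header in plain Java (assuming Java 8's java.util.Base64; the key and secret are obviously made up):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader
{
 public static void main(String[] args)
 {
  String apiKey = "myApiKey";   // made-up credentials
  String secret = "mySecret";

  // Basic authentication is just "username:password", base64 encoded -
  // no custom hashing scheme required
  String token = Base64.getEncoder()
    .encodeToString((apiKey + ":" + secret).getBytes(StandardCharsets.UTF_8));

  System.out.println("Authorization: Basic " + token);
 }
}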

Instead, they want me to SHA-256 my key and secret before sending in a weird custom header.

I imagine their reasoning here is that SHA-256 is a secure one-way hash that can't be intercepted and reversed. (The Base64 encoding is definitely not secure). This is totally unnecessary – if they'd simply mandated HTTPS, then the traffic would be automatically encrypted, including the username and password. My guess is they've had a business directive asking them not to insist on HTTPS, and they've tried to fix that by rolling their own (almost certainly broken) security scheme.

A fundamental rule of security is that you should never roll your own security scheme, because it *will* be flawed.

Why this matters: I don’t care that the bank might get hacked, but I do care about the wasted day I spent trying to comply with their weird hashing rules. If they’d done it properly, my REST Client would have been able to handle the key and secret through a simple method call.

Bad return codes

This one wins them the jackpot. Every one of their API calls returns “Success!” (HTTP 200), until you check the body string and find out it actually failed. So I’m forced to write client code like this:

// the status is always 200, so I'm forced to compare against a magic string in the body
ResponseBody response = rest.get("big ugly uri");
if (response.getEntity().equals("Transaction Suceeded"))
{
 // continue
}

YES – they have misspelled “Suceeded” (should be two c’s). So – when they fix this typo my code will instantly break. Thanks, Bank.

Why this matters: I had to spend a long time probing their API to find out the strings they’re returning. I now have brittle string checks which are very likely to break at any time they decide to change those strings. And they will.
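
For comparison, if the API returned proper status codes, the client check would be against the code rather than a magic string. A rough sketch using Spring's RestTemplate (the URI is invented):

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class StatusCodeCheck
{
 public static void main(String[] args)
 {
  RestTemplate rest = new RestTemplate();

  // a well-designed API reports failure through 4xx/5xx status codes
  // (by default RestTemplate will actually throw an exception for those)
  ResponseEntity<String> response =
    rest.getForEntity("https://api.example-bank.com/transactions/123", String.class);

  if (response.getStatusCode() == HttpStatus.OK)
  {
   // continue
  }
 }
}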

In many REST textbooks, they get themselves excited about HATEOAS. Forget that, the basics of URIs, Representations, HTTP Verbs, HTTP Return Codes and Security are all fundamental. Not many get all of these right, and a huge number get them ALL wrong. Don’t be like the bank.

On our Spring REST course, we set a programming challenge, part of which is to design a REST API – you can see that video here – it’s a bit long but there are some interesting decisions to make. Subscribe and you get the full course!

How to attach a Debugger to a running Tomcat or Glassfish instance

This is a frequently asked question from many of our customers at Virtual Pair Programmers, so I thought a blog post to capture the details would be in order. I’ll focus on Tomcat and Glassfish here, because we use them on our courses – but the details are the same for other servers.

  • (for Glassfish) 1) Run the server under debug. The easiest way is to run as normal, then go to the admin console. Go to configurations -> server-config (not default config) -> JVM Settings. Click Debug Enabled. These options will be different on different Glassfish versions (I'm on 4), but you should be able to find them. Check the port that the debugger will run on – it will be part of the debug options, and usually the default is 9009

    (for Tomcat) 1) Add a JVM option called “agentlib” to your startup script. On our courses, we use a bootstrap script called startup.bat, and you can edit it to look like this:

    cd ./tomcat/bin/
    java -Dsun.lang.ClassLoader.allowArraySyntax=true -agentlib:jdwp=transport=dt_socket,address=9009,server=y,suspend=n -jar bootstrap.jar
    

    (note: we use a simple bat file for bootstrapping Tomcat on our courses to simplify support: if you’re not using this script, then you need to put the JVM options in a new file bin/setenv.bat. Full details can be found here: )
  • 2) Restart the server (in Glassfish, a link may appear at the top of the page you can click. Otherwise, run the stopserv script and then startserv)
  • 3) Remember to add breakpoints in your code where required *AND re-deploy*. I sometimes forget to do this and wonder why I don’t hit any breakpoints.
  • 4) Now you can attach Eclipse to the debugger:

    a) Debug icon -> Debug Configurations
    b) Click “Remote Java Application”
    c) click the tiny icon at the top left – it is for “new session”
    d) Enter the correct port number you noted earlier (we suggested 9009)
    e) click the debug button.

  • 5) I find this odd: you won't see anything special at this stage – you have attached to the running server *in the background*. There won't be a console window and you won't switch to the debug perspective.
  • 6) You now need to hit a breakpoint, so to do this, exercise your code. This may be visiting one of your webpages, or running a test harness.
  • 7) When your client code causes a breakpoint to trigger in the server code, your run should be interrupted with a request to switch to the debug perspective, and you can now step through the code as usual.

I hope that’s useful!

Writing a Custom HTTP Message Converter in Spring

The Spring Webservices course got so big that we had to cut a few minor topics, and I promised on the video that I would write some blog posts covering them. Here’s the first of them, how to write a “Custom Message Converter”.

You probably don’t need to do this very often – I’ve never had to do this “in real life”. But it is a useful exercise to get a better understanding of what those message converters are doing.

Recall that in Spring, a MessageConverter is a class that is capable of converting a regular Java domain object to a REST representation (and back again). Spring has a small set of default converters already built in, but the two main ones are for JSON (most common representation used in REST) and XML.

For this exercise, let's assume that for some reason, our REST application needs to support YAML as well. YAML originally stood for "Yet Another Markup Language" (these days it officially stands for the recursive "YAML Ain't Markup Language") and aims to be simpler than XML. It's used a lot in Rails.

As a starting point, I’ve fired up the REST project that we built on the training course. I’ve also started up the standard Spring REST shell:

baseUri mywebapp
get /customers

< 200 OK
< Server: Apache-Coyote/1.1
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Thu, 12 Feb 2015 17:56:33 GMT
<
{
  "customers" : [ {
    "customerId" : "100029",
    "companyName" : "Acme",
    "email" : null,
    "telephone" : null,
    "notes" : "No Notes",
    "calls" : null,
    "version" : 1,
    "links" : [ {
      "rel" : "self",
      "href" : "http://localhost:8080/mywebapp/customers/customer/100029?fullDet

As on the course, if the client wants XML instead, they can change the accept headers:

headers set --name accept --value application/xml

And now we repeat the get request….

get /customers
> accept: application/xml

< 200 OK
< Server: Apache-Coyote/1.1
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Thu, 12 Feb 2015 18:04:06 GMT
<
<<customers><customer><companyName>Acme</companyName
><customerId>100029</customerId> ... lots of XML snipped

But there is no YAML message converter installed by default in Spring….

headers set --name accept --value application/yaml
get customers

> accept: application/yaml

< 406 NOT_ACCEPTABLE

So let’s write a YAML Message Converter!

Step 1: Add the JAR file for YAML

One Java YAML parser is called SnakeYAML (code.google.com/p/snakeyaml). You can download the JAR from there, but if you have done our course, I actually supplied this JAR in the “Additional JARs” folder. So pull it from there and add it to your build path.

This library is very easy to use. If you want to try it out, you can easily convert an object into YAML (and back again) in a test harness.

import org.yaml.snakeyaml.Yaml;

import com.virtualpairprogrammers.domain.Customer;

public class TestYaml 
{
 public static void main(String[] args)
 {
  Customer c = new Customer("10012", "Acme","Notes");
  
  // dump() serializes the object graph to a YAML string
  Yaml yaml = new Yaml();
  System.out.println(yaml.dump(c));
 }
}

This gives an output like this:

!!com.virtualpairprogrammers.domain.Customer
calls: []
companyName: Acme
customerId: '10012'
email: null
notes: Notes
telephone: null
version: 0

Step 2: Write the converter

This is the bulk of the work. To write a message converter, extend the Spring AbstractHttpMessageConverter, and override the three methods as below.

  • readInternal() describes how Spring should convert the data (YAML) into a Java object.
  • writeInternal() is the opposite – it generates a YAML String from a Java object (this will be done in a similar way to our test above).
  • The supports() method is used to determine whether the converter actually supports conversion to and from the type of object in question. You might decide that you’re not going to support collections for example. We’ll simply return true and support any object.

In the constructor, we call the superclass constructor, which requires a MediaType object to denote what the HTTP media type is. We’re supporting application/yaml.

The implementations of the read and write methods are fairly routine – we're just using the SnakeYaml library. It takes a bit of fiddling with the API of the HttpInputMessage and HttpOutputMessage classes to get what you need. In the read method, getBody() returns a standard Java InputStream, which luckily SnakeYaml can accept directly. In the write method, we have to convert the YAML String into a byte array so we can send it to the write() method of the HttpOutputMessage. It's all a little awkward, but straightforward in the end.


package com.virtualpairprogrammers.messageconverters;

import java.io.IOException;

import org.springframework.http.HttpInputMessage;
import org.springframework.http.HttpOutputMessage;
import org.springframework.http.MediaType;
import org.springframework.http.converter.AbstractHttpMessageConverter;
import org.springframework.http.converter.HttpMessageNotReadableException;
import org.springframework.http.converter.HttpMessageNotWritableException;
import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.Constructor;

public class YamlMessageConverter<T> extends AbstractHttpMessageConverter<T>
{
 public YamlMessageConverter()
 {
  // this converter handles the application/yaml media type
  super(new MediaType("application","yaml"));
 }
 
 @Override
 protected T readInternal(Class<? extends T> clazz, HttpInputMessage inputMessage)
   throws IOException, HttpMessageNotReadableException 
 {
   // SnakeYaml reads the YAML directly from the request body's InputStream
   Yaml yaml = new Yaml(new Constructor(clazz));
   @SuppressWarnings("unchecked")
   T object = (T) yaml.load(inputMessage.getBody());
   return object;
 }

 @Override
 protected boolean supports(Class<?> clazz) {
  // support any object type - you could restrict this if you wanted
  return true;
 }
 
 @Override
 protected void writeInternal(T object, HttpOutputMessage outputMessage)
   throws IOException, HttpMessageNotWritableException 
 {
  // dump the object to a YAML string, then write the bytes to the response body
  Yaml yaml = new Yaml();
  String result = yaml.dump(object);  
  outputMessage.getBody().write(result.getBytes());
 }
}

Step 3: Register the converter

The magic that makes the default message converters automatically happen is the <mvc:annotation-driven> tag in your Spring configuration.

We can add our new YAML Converter inside this tag:

<!-- This will automatically switch on the default httpmessageconverters -->
 <mvc:annotation-driven content-negotiation-manager="contentNegotiationManager">
  <mvc:message-converters register-defaults="true">
   <bean class="com.virtualpairprogrammers.messageconverters.YamlMessageConverter"/>
  </mvc:message-converters>
 </mvc:annotation-driven>

Note: the “register-defaults=true” is needed – without it, the default converters will not be registered and you will end up with only the YAML one.
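
If you're using Java configuration rather than XML, the equivalent registration would look roughly like this (a sketch assuming Spring 4.1 or later, where extendMessageConverters adds to the defaults in the same way that register-defaults="true" does in the XML):

import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

import com.virtualpairprogrammers.messageconverters.YamlMessageConverter;

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter
{
 // extendMessageConverters adds to the default converters rather than replacing them
 @Override
 public void extendMessageConverters(List<HttpMessageConverter<?>> converters)
 {
  converters.add(new YamlMessageConverter<Object>());
 }
}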

And that’s it. We can now deploy the application and test:

headers set --name accept --value application/yaml
get customer/100029

< 200 OK
< Server: Apache-Coyote/1.1
< Content-Type: application/yaml
< Transfer-Encoding: chunked
< Date: Fri, 13 Feb 2015 12:55:47 GMT
<
!!com.virtualpairprogrammers.domain.Customer
calls: []
companyName: Acme
customerId: '100030'
email: null
notes: No Notes
telephone: null
version: 1

Our representation is now in YAML.

I hope this exercise may prove useful to someone – to be honest I’m not really interested in YAML, the main point of the exercise is to get an understanding of what those mysterious HttpMessageConverters are doing!

Minor bug in our Webservices course

I’ve discovered a minor fault in our Webservices course. We supply a JSON file containing a data graph – and there’s a missing curly bracket! This is important because without it, any attempt to record a call via the REST Shell will fail with a JSON properties exception.

The file should look like this:

{"call":
 {"notes":"Customer called to complain about late delivery.",
         "timeAndDate":"2014-12-05T04:00:00Z"},

 "actions":[{"details":"Return call.",
             "requiredBy":"2016-12-09",
             "owningUser":"rac",
             "complete":false},
            {"details":"Check handled ok",
             "requiredBy":"2016-12-25",
             "owningUser":"rac",
             "complete":false}
           ]
}

The missing curly bracket is added to the end of the line with the timeAndDate.

Many apologies for the error, I hope it hasn’t caused too many problems.

Decent settings for DBCP Connection Pools

The Spring Framework course from 2009 is the first course that we’ve re-recorded at VirtualPairProgrammers. Although surprisingly little has changed in Spring since then, we felt it was time to polish the course up a little, to use the latest Spring 4, and in particular to use a more modern format for the video – with the second edition you’ll be able to view it on iPads and mobile devices, as with most of our other courses.

Note: everyone who bought the first edition of the course will automatically receive the second edition on the day of release – currently slated for around the 14 March 2014, but there may be delays as we complete the editing process.


I have actually made very few changes from the original. One area that I felt worthy of update was in our choice of connection pool. In the first edition we used the Apache DBCP connection pool, largely because it was the pool of choice at that time for the reference manual.

Since then, it's fair to say that DBCP has come in for a lot of criticism, and other pools such as c3p0, Proxool or the Tomcat pool have become more popular.

There’s a great debate about this at StackOverflow (see here: https://stackoverflow.com/questions/520585 – a shame they closed the question as “not constructive” because it most certainly was constructive).

In the end, however, I decided to continue using DBCP for the second edition, partly to keep consistency with the old course, but also because actually DBCP isn’t that bad – we’ve used it successfully on several large scale projects with high traffic.

I think the biggest problem with DBCP is that the defaults are so poor. If you configure DBCP with just a driver, url, user and pass, then you’re going to end up with a  pool that soon locks up.

On the re-recorded version I alert the viewers to this, and tell you that you really need to tweak the pool to bring it to a performant level. But there isn’t time on the course to get bogged down in this, so I pointed the viewers to this blog post, where some more sensible values can be found.

Our default settings are:

  • maxActive = 150
  • maxIdle = 10
  • minIdle = 5
  • initialSize = 5
  • minEvictableIdleTimeMillis = 1800000
  • timeBetweenEvictionRunsMillis = 1800000
  • maxWait = 10000
  • validationQuery = “SELECT 1”
  • testOnBorrow=true
  • testOnReturn=true
  • testWhileIdle=true

And you set each of these properties in the Spring XML in the same way you set the driver etc. E.g. <property name="maxActive" value="150"/>
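
For reference, here's a sketch of the same settings as a Java-configured bean instead of XML (assuming commons-dbcp 1.x; the driver, URL and credentials are placeholders):

import javax.sql.DataSource;

import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig
{
 @Bean(destroyMethod = "close")
 public DataSource dataSource()
 {
  BasicDataSource ds = new BasicDataSource();
  ds.setDriverClassName("com.mysql.jdbc.Driver");   // placeholder driver
  ds.setUrl("jdbc:mysql://localhost:3306/mydb");    // placeholder URL
  ds.setUsername("user");
  ds.setPassword("password");

  // the pool settings from the list above
  ds.setMaxActive(150);
  ds.setMaxIdle(10);
  ds.setMinIdle(5);
  ds.setInitialSize(5);
  ds.setMinEvictableIdleTimeMillis(1800000);
  ds.setTimeBetweenEvictionRunsMillis(1800000);
  ds.setMaxWait(10000);
  ds.setValidationQuery("SELECT 1");
  ds.setTestOnBorrow(true);
  ds.setTestOnReturn(true);
  ds.setTestWhileIdle(true);
  return ds;
 }
}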

I’m not saying these values are good for any application – you need to test, tweak and tune, but at VPP we use these settings as a starting point, and they are in fact the exact settings we currently have on our live site. Our live site isn’t exactly high traffic in the Facebook/Google sense, but we do get heavy traffic when we release a new course, so these settings should be reasonably good for most average websites.

Having said that, you can also switch to other pools quite easily, but I wanted to capture these defaults somewhere.

New course released soon – Java Build Tools

Edit to add the course was indeed released on 26 September!

I’ve been working for the last few months on a course that many of our customers have asked for – a course that covers the two major Java build tools, Ant and Maven.

It will be available at VirtualPairProgrammers on 26 September 2013. I’ll be announcing it here and we’ll also be in touch if you’re on our mailing list, Facebook page and Twitter.

It has taken so long to record because a) I always take a long time to record(!) but also b) both Ant and Maven contain so many little twists and turns, and I feel that any decent course should get at least a little bit deep.

I’m well aware that both Ant and Maven are a little bit old now (don’t get me wrong: they are both used in thousands of projects around the world – their value is enormous!), so I’ve spiced up the course with a third build tool – it’s much newer and much easier to use: Gradle.

Gradle isn’t used on as many projects, but I’m hoping it’s going to get more popular over time. Hopefully this course will help raise its profile a little!

The chapter list isn’t yet complete, but the structure will be:

  • Introduction: Why use build tools?
  • Part One: Ant. (around 3 hours across five chapters)
  • Part Two: Maven (again around 3 hours)
  • Part Three: Gradle (about 90 minutes and two chapters)

On all three parts of the course, I show how to create a build from scratch, and the end result is a web application deployed to Tomcat.

Once that’s released, I’m due to start a big new project…

Running our JavaEE course on Glassfish 4

We’ve had a few requests asking if our JavaEE course can be run on Glassfish 4 (at the time of writing, the latest version).

The quick answer is: yes, but be careful. You will be able to run all the way up to chapter 16 before you see any problems, and they are very minor. However, Glassfish 4 is not as good as version 3 at reporting errors, and you don’t gain any features that you will need to follow the course.

As the important thing is learning the fundamentals of JavaEE (and these haven’t changed in JavaEE 7), my advice is to install a Java 6 JDK, and then use the Glassfish 3.0 that we ship with the course. Glassfish 3.0 is more stable, and seems to run faster. You can always upgrade to a later version once you’ve finished the course and understand the concepts.

(Glassfish 3.0 uses JDK6, although I’ve blogged here about how you can use it with JDK7. You *can* use Glassfish 3.2 with JDK7, but there was a horrible bug in 3.2 that prevented redeployments – I blogged about that here).

However, some of you will want to use Glassfish 4 – perhaps you need an advanced feature, or your company/project is already using it. In that case, you can do the course just fine, but there are a few things to be aware of.

1: Spurious warnings and swallowed errors

As soon as you deploy an application that uses a database on the server (Chapter 10), you will get the following:

Command succeeded with Warning. Cannot create tables for application. The expected DDL file EmployeeManagement_employeeDb_createDDL.jdbc is not available. Cannot create tables for application EmployeeManagement. The expected DDL file EmployeeManagement_employeeDb_createDDL.jdbc is not available.

In fact, all this is saying is "we looked to see if you have a custom create-tables script in a file, and you don't". But that's not a problem, because we are using automatic creation of tables, and that will happen in the background. So don't worry: despite the wording, it could (and will) create the tables.

So, it is a very annoying warning. But it gets worse. If you DO have an error in your application (for example, I forgot to annotate one of my injected EJBs), you will get exactly the same warning – but this time as an error. But it will give no further clue as to the real cause of the problem. For that, you need to check in the log (by default, this version logs directly to the console that you started the application server in). So don’t forget that your real problem is probably unrelated to creating tables. Check the console log, although you will have a lot of useless information to wade through.

For this reason, I advise avoiding using this version for the training – but if you decide to go ahead, budget for some extra debugging time.

2: Glassfish Libraries

On the course, I advise you to add external references to a large collection of jar files. That was due to a bug in Glassfish 3; in Glassfish 4 you just need to add an external reference to gf-client.jar in the early chapters (you'll need a few more later).

Note: gf-client.jar is now located in GLASSFISH_HOME/glassfish/lib

For the JSF chapters, you will need an external link to the GLASSFISH_HOME/glassfish/modules/javax.faces.jar

3: Differences in the UI

There are some minor changes to the UI – thankfully it is hardly changed, but you may have to hunt around for a few menu items.

4: Chapter 16, SOAP Webservices

All chapters up to and including the SOAP webservice chapter should run as on the video. However, in Chapter 16 you must write a new class to represent your webservice. I blogged about this here – although you could get away with not doing so in earlier versions, you need to be careful to do this in Glassfish 4. It's better engineering anyway!
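
To give a rough idea of what I mean by a dedicated class, here's a minimal sketch (the class and method names are invented, not the actual class from the course): the web service lives in its own small @WebService class, which in the course delegates to the business EJB, rather than the @WebService annotation being bolted onto an existing EJB.

import javax.jws.WebMethod;
import javax.jws.WebService;

// A sketch only: a small, dedicated class whose sole job is to be the web service.
// In practice its methods would simply delegate to the injected business EJB.
@WebService
public class StaffManagementWebService
{
 @WebMethod
 public String ping()
 {
  return "Staff management web service is running";
 }
}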

5: Chapter 18, REST Webservices

In the video, we ask you to configure a servlet in your application. This servlet is provided by Glassfish, and it does the work of providing the web service to clients. This isn't part of the JavaEE standard, and it is therefore subject to change. And it has: it was previously a Sun class; I guess for political reasons they've renamed it.

You will need to change your web.xml file. Substitute this for the corresponding XML that you add in this chapter:

<!-- Configuration for JAX-RS -->
  <servlet>
     <servlet-name>Jersey Web Application</servlet-name>
     <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
   
     <init-param>
        <param-name>jersey.config.server.provider.packages</param-name>
        <param-value>com.virtualpairprogrammers.staffmanagement.rest</param-value>
     </init-param>

   <load-on-startup>1</load-on-startup>        
  </servlet>
  
  <servlet-mapping>
     <servlet-name>Jersey Web Application</servlet-name>
     <url-pattern>/webservice/*</url-pattern>
  </servlet-mapping>

Edit to add: our Blogging platform seems to add some odd characters in the extract above: if you see XML errors you can download my working web.xml from here.

6: Chapter 19, REST Client

In the training course, we use Jersey-Client version 1, which allows us to call REST-based webservices from Java. Your Glassfish 4 is using Jersey version 2. You can continue to use exactly the same client, because the client doesn't know or care that you have upgraded your server. (All the client is doing is issuing HTTP requests – there's no problem with the version mismatch).

If you need to keep up to date on the client as well, you can download Jersey version 2 from here. If you do this, you will need the JAR files from their lib directory, plus all of the JARs in the ext directory.

Unfortunately, Jersey (client) version 2 uses a much altered API, and your client code will break. The changes aren’t that radical, but there are too many to list here (for example: all of the packages are renamed, and instead of calling web.get(), you have to call web.request().get()).
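
To give you a flavour of the new API, a minimal JAX-RS 2.0 / Jersey 2 client call looks roughly like this (the URI is a placeholder based on the servlet mapping above):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;

public class Jersey2ClientExample
{
 public static void main(String[] args)
 {
  Client client = ClientBuilder.newClient();

  // Jersey 1 was roughly: webResource.get(String.class)
  // In JAX-RS 2.0 you call request() before the HTTP verb:
  WebTarget target = client.target("http://localhost:8080/StaffManagement/webservice/customers");
  String json = target.request(MediaType.APPLICATION_JSON).get(String.class);

  System.out.println(json);
  client.close();
 }
}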

You can download the modified REST client here, and a new version of the class we wrote to test delete requests from here – compare these with the ones we used on the course.

However, to reiterate, you don’t have to use Jersey version 2 for the client. Version 1 will interoperate perfectly well with your new server.

Edit to add: I accidentally deleted the Client Application from our server, many apologies if you’ve tried to download this without success. It’s now been re-added.

7: Chapter 20, Security

These are the biggest changes because authentication isn’t specified in the JavaEE standard. Here are the changes:

* EJB Client: You need to change your external library to point to GLASSFISH_HOME/glassfish/modules/security-ee.jar (and change the reference to this file in your build.xml script). You will now import com.sun.enterprise.security.ee.auth.login.ProgrammaticLogin instead.

  When you add the user and password to the file security realm, be sure to use the “Server Config”. This didn’t exist on Glassfish 3.

  When you now call pl.login(username, password), you will find this method is now deprecated. You can still call it, but to avoid the deprecation warning, you can call pl.login(username, password.toCharArray());

* REST Client: You will now need to call:

Client client = ClientBuilder.newClient();
client.register(new HttpBasicAuthFilter("rac", "secret"));

Anything else?

A long post, but actually very few real changes. That’s as it should be – JavaEE usually introduces new features and avoids breaking changes. If I have missed anything, do contact me and I’ll update this blog post as and when things change!