Solving a Parallel Streams Puzzler in Java 8

This blog post was co-written with Richard Warburton and reviewed by Brian Foster from O’Reilly.

In this post, you will learn how to use parallel streams correctly, which requires a different approach from traditional imperative Java programming.

Let’s say that you need to process the values of a list of transactions by accumulating them into a particular bank account. The class Account below provides three simple methods to process a transaction, add a certain amount to the total balance, and return the total balance.

class Account {
    private long total = 0;

    public void process(Transaction transaction) {
        add(transaction.getValue()); // assumes Transaction exposes its amount via getValue()
    }

    public void add(long amount) {
        total += amount;
    }

    public long getAvailableAmount() {
        return total;
    }
}


Let’s say you have a list of Transaction objects available. You can simply iterate through the list and process each transaction one by one using your bank account:

Account myAccount = getBankAccountWithId(1337);
for (Transaction transaction : transactions) {
    myAccount.process(transaction);
}

Inspired by the code above, you may be tempted to use the Streams API to solve this problem as follows:

transactions.stream()
            .forEach(transaction -> myAccount.process(transaction));

What’s the problem here? You are modifying the state of the account using an inherently sequential approach (i.e., you are iteratively updating its state).

But what happens if we run this code in parallel using parallel streams? Let’s use a generated sample of transactions to test:

List<Transaction> transactions
    = LongStream.rangeClosed(0, 1_000)
                .mapToObj(Transaction::new) // assumes Transaction has a constructor taking a long amount
                .collect(Collectors.toList());

You can now run the code in parallel and print the output:

transactions.parallelStream()
            .forEach(transaction -> myAccount.process(transaction));
System.out.println("The total balance is " + myAccount.getAvailableAmount());


You will find that you get different results on different runs. For example, when we ran it:

The total balance is 448181

The total balance is 421258

The total balance is 398291

This is very far off the correct result, which is 500500! In fact, what is happening is that you have a data race on each access to the field total. Multiple threads are trying to read, modify and update the shared state of the bank account. As a consequence, they are stepping on each other’s toes which leads to unpredictable outputs.

What’s the solution? You may be tempted to simply refactor the add method to be synchronized. But this is a bad solution because it adds further thread contention. In other words, your threads are waiting on the result of another before they can proceed. Although using AtomicLong doesn’t require a global lock, the same principle remains: you want to let threads work independently without waiting on one another.
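To illustrate the point (a sketch that isn’t in the original post), here is what a synchronized variant of the accumulator might look like. It produces the correct total in parallel, but every thread queues on the same lock:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class SynchronizedAccount {
    private long total = 0;

    // Correct under concurrent access, but all threads contend on this lock.
    public synchronized void add(long amount) { total += amount; }
    public synchronized long getAvailableAmount() { return total; }

    public static void main(String[] args) {
        SynchronizedAccount account = new SynchronizedAccount();
        List<Long> amounts = LongStream.rangeClosed(0, 1_000)
                                       .boxed()
                                       .collect(Collectors.toList());
        // The terminal forEach blocks until every element has been processed.
        amounts.parallelStream().forEach(account::add);
        System.out.println(account.getAvailableAmount());
    }
}
```

The result is now deterministic, but the threads spend their time waiting for one another instead of working independently.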

The Streams API is designed to work correctly under certain guidelines. In practice, to benefit from parallelism, each operation is not allowed to change the state of shared objects (such operations are called side-effect-free). Provided you follow this guideline, the internal implementation of parallel streams cleverly splits the data, assigns different parts to independent threads, and merges the final result. A more idiomatic form of solving the initial problem is as follows:

long sum = transactions.parallelStream()
                       .mapToLong(Transaction::getValue) // assumes Transaction exposes getValue()
                       .sum();
While it may look appealing to use parallel streams because it is so simple to do (after all, it’s only a parallel() or parallelStream() call away), code that works sequentially often does not work as expected in parallel. In addition, using parallel streams doesn’t guarantee that the code will run any faster (it may actually run slower!). There are many caveats to consider, including the computation cost per element, the size of the data, and the characteristics of the data source. The code for the above sample is available here.
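As a self-contained check of the side-effect-free version (using plain Long values in place of the post’s Transaction class, which is a simplification):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class ParallelSum {
    public static void main(String[] args) {
        List<Long> values = LongStream.rangeClosed(0, 1_000)
                                      .boxed()
                                      .collect(Collectors.toList());

        // No shared mutable state: each thread sums its own chunk and the
        // partial results are merged at the end.
        long sum = values.parallelStream()
                         .mapToLong(Long::longValue)
                         .sum();

        System.out.println("The total balance is " + sum);
    }
}
```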

You can learn more about recommended Java 8 techniques and guidelines in our one-day online course Refactoring Legacy Code with Java 8 on July 19th, 2016.

If you are interested in in-person Java masterclasses on Java 8, software design or secure programming, you can find more courses from Raoul and Richard at


This blog post is a merge of two blog posts I wrote a few years ago.


There’s a famous lightning talk given by Gary Bernhardt about JavaScript and Ruby oddities.
I would like to start a series of blog posts documenting some oddities in the Java language for fun! I’ll explain why or where these oddities come from with reference to the Java Language Specification when possible. I hope you learn some new things. Feel free to email or tweet me if you would like to add to the list.

Array Declarations

Java programmers can declare array variables in several ways:

int[] a;
int b[]; // allowed to make C/C++ people happy

However, the grammar doesn’t enforce a particular style for arrays of dimensions greater than one. The [] may appear as part of the type, or as part of the declarator for a particular variable, or both. The following declarations are therefore valid:

int[][] c;
int d[][];
int[] e[]; // oops
int[][] f[]; // oops

Mixing these notations is obviously not recommended by the Java Language Specification (Array Variables), as it can lead to confusion, and it is flagged by code-convention tools such as Checkstyle.
This can be taken to the extreme. The following method signature in a class or interface declaration will be accepted by the standard Javac parser:

public abstract int[] foo(int[] arg)[][][][][][][][][][][];
The return type of the method foo is int[][][][][][][][][][][][].

In fact, the grammar of ClassBodyDeclaration is defined as follows:

ClassBodyDeclaration =
.. | TypeParameters (Type | VOID) Ident MethodDeclaratorRest | ..

MethodDeclaratorRest =
FormalParameters BracketsOpt [Throws TypeList] ( MethodBody | [DEFAULT AnnotationValue] ";")

BracketsOpt = {"[" "]"}

The BracketsOpt rule allows a sequence of [] to be inserted after the formal parameters definition.
The relevant lines within the javac parser source start at line 2938.
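You can check this behaviour for yourself with a small reflection experiment (my own illustration, not from the original post):

```java
import java.lang.reflect.Method;

public class Brackets {

    // Brackets after the parameter list are legal and count toward the
    // return type: this method actually returns int[][].
    static int[] foo(int[] arg)[] {
        return new int[0][];
    }

    public static void main(String[] args) throws Exception {
        Method m = Brackets.class.getDeclaredMethod("foo", int[].class);
        System.out.println(m.getReturnType().getSimpleName());
    }
}
```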

Array Covariance

Java arrays are covariant. This means that given a type S which is a subtype of a type T, S[] is considered a subtype of T[]. This property is described in the Java Language Specification (Subtyping among Array Types). It is known to lead to an ArrayStoreException at runtime, as documented in the Java Language Specification (Array Store Exception). For example:

Object[] o = new String[4];
o[0] = new Object(); // compiles but a runtime exception will be reported

Arrays were made covariant because before the introduction of generics it allowed library designers to write generic code (without type safety). For example, one could write a method findItems as follows:

public boolean findItems(Object[] array, Object item)

This method will accept arguments such as (String[], String) or (Integer[], Integer) and in a sense reduces code duplication since you don’t need to write several methods specific to the types of the arguments. However, there is no contract between the element type of the array that is passed and the type of the item that needs to be found.

Nowadays one can use generic methods (making use of a type parameter) to achieve the same mechanism with additional type safety:

public <T> boolean findItems(T[] array, T item)
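A possible implementation of the generic version (the body is my own sketch; the post only shows the signature):

```java
import java.util.Objects;

public class FindItems {

    // The element type of the array and the type of the item are now tied
    // together by the type parameter T, so mismatched calls fail to compile.
    public static <T> boolean findItems(T[] array, T item) {
        for (T element : array) {
            if (Objects.equals(element, item)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(findItems(new String[] {"a", "b"}, "b"));
        System.out.println(findItems(new Integer[] {1, 2, 3}, 42));
    }
}
```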

Integer Caching

int a = 1000, b = 1000;
System.out.println(a == b); // true
Integer c = 1000, d = 1000;
System.out.println(c == d); // false
Integer e = 100, f = 100;
System.out.println(e == f); // true

This behaviour is documented in the Java Language Specification (Boxing Conversion):

If the value p being boxed is true, false, a byte, or a char in the range \u0000 to \u007f, or an int or short number between -128 and 127 (inclusive), then let r1 and r2 be the results of any two boxing conversions of p. It is always the case that r1 == r2.

For those curious, you can look up the implementation of Integer.valueOf(int), which confirms the specification:

public static Integer valueOf(int i) {
    assert IntegerCache.high >= 127;
    if (i >= IntegerCache.low && i <= IntegerCache.high)
        return IntegerCache.cache[i + (-IntegerCache.low)];
    return new Integer(i);
}

Dangerous Method Overloading

List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3));
int v = 1;
list.remove(v);
System.out.println(list); // prints [1, 3]

List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3));
Integer v = 1;
list.remove(v);
System.out.println(list); // prints [2, 3]

The java.util.List interface describes two methods named remove.

  • The first one is remove(int). It removes the element at a given index, which is represented by a value of type int (note: indices start at 0).
  • The second one is remove(Object). It removes the first occurrence of the object passed as argument.

This is referred to as method overloading: the same method name is used to describe two different operations, and the choice of operation is based on the types of the method’s parameters. In academic terminology, we would say that it is an example of ad-hoc polymorphism.

So what happens in the piece of code above? The first case is straightforward as we pass a variable of type int and there’s a signature for remove which expects exactly an int. This is why the element at index 1 is removed.

In the second case, we pass an argument of type Integer. Since there is no signature for remove that directly takes an Integer parameter, Java tries to find the closest matching signature. The Java Language Specification (Determine Method Signature) states that resolution based on subtyping comes before boxing/unboxing rules are allowed. Since java.lang.Integer is a subtype of java.lang.Object, the method remove(Object) is invoked. This is why the call remove(v) finds the first Integer containing the value 1 and removes it from the list.

Note that this problem wouldn’t exist if the java.util.List interface differentiated the two remove operations with two different method names: removeAtIndex(int) and removeElement(Object). For those interested in getting more views about method overloading, there is a famous paper from Bertrand Meyer on the topic.
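If you cannot rename the methods, an explicit cast is enough to select the overload you mean (a small illustration of my own):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3));

        // The cast forces remove(Object): the element 1 is removed.
        list.remove((Integer) 1);
        System.out.println(list);

        // A plain int selects remove(int): the element at index 1 is removed.
        list.remove(1);
        System.out.println(list);
    }
}
```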

Array Initializer Syntax Curiosity

Java, just like C and C#, allows a trailing comma after the last expression in an array initializer. This is documented in the Java Language Specification (Array Initializer).

However, what if the initializer contains no expression? This is where Java differs from other languages like C and C#:

// Java
int a[] = {}; // valid
int b[] = {,}; // also valid, an array of length 0 >:o
// C
int a[] = {,}; // error: expected expression before ',' token
// C#
int a[] = {,}; // Unexpected symbol ','

The Type of a Conditional Expression

// credit to fragglet
Object o = true ? 'r' : new Double(1);
System.out.println(o); // 114.0
System.out.println(o.getClass()); // class java.lang.Double

This looks a bit odd. The condition is true, so you might expect that the char ‘r’ would be boxed into java.lang.Character.

How did we end up with java.lang.Double as the runtime type of o? The value 114.0 looks suspicious as well – but we might guess that it’s the ASCII value which corresponds to the character ‘r’. But why is it ending up in a numeric type?

Let’s take a step back, and examine the general question – which is: what should the type of the conditional expression be if the type of the second and third operand are different?

Java has a set of rules to determine this as explained in the Java Language Specification (Conditional Expression).

In this case, the rules say that first of all the third operand is unboxed to the primitive type double. This is specified by the binary numeric promotion rules. After that, a more familiar rule kicks in – the promotion rule for doubles.

This says that if either operand is of type double, the other is converted to double as well. This is why the second operand of type char is widened to a double.

The second and third operands now have the same type, and this is the resulting type of the conditional expression – so the expression’s type is the primitive type double (and its value is the primitive value 114.0). Finally, since we are assigning the result of the conditional expression to a variable of type Object, Java performs assignment conversion: the primitive type double is boxed to the reference type Double (java.lang.Double).

Note that such a mechanism wouldn’t be needed for conditional expressions if Java restricted the second and third operands to be strictly of the same type. An alternative option could be union types.
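One way to sidestep these promotion rules (my own sketch, not from the original post) is to give one operand a reference type explicitly, so no binary numeric promotion applies:

```java
public class ConditionalDemo {
    public static void main(String[] args) {
        // Binary numeric promotion: 'r' is widened to double, then boxed.
        Object o1 = true ? 'r' : new Double(1);
        System.out.println(o1.getClass().getSimpleName());

        // Casting the char to Object avoids numeric promotion entirely:
        // 'r' is simply boxed to Character.
        Object o2 = true ? (Object) 'r' : new Double(1);
        System.out.println(o2.getClass().getSimpleName());
    }
}
```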

Java’s Intersection Types: Practical Example

This blog post was co-written with Richard Warburton

In a nutshell

Generic types were added to Java way back in 2004 and have seen extensive usage by library and application developers ever since. Not all generics features have seen equal usage, though. In this blog post we’re going to explore intersection types, one of the lesser-known features that Java generics offer.

Intersection types are a form of generic type that looks like <T extends A & B>, where T is a type parameter and A and B are two types. In order to understand intersection types, we first need to talk about how the extends keyword is used in generics. If you see a generic type parameter like <T extends Comparable>, it means that the type parameter T is restricted to classes that implement the interface Comparable. Remember that “extends” in generic types can refer to either extending a class or implementing an interface.

Intersection types are for situations where we want to be really greedy with our requirements: we only want to allow classes that extend (or implement) both of the listed types. But why on earth would we want to do this? Let’s walk through a worked example to understand intersection types properly.

DataInputStream and RandomAccessFile objects

Let’s suppose we’ve got a Person class, and every Person object has a name and an age that are set in the constructor.

public Person(String name, int age)

Our system is trying to de-serialise person instances from an input stream, like a storage file or a network connection. In order to do this we’ve got a read method that takes a DataInputStream. It reads out the name as a Unicode string and the age as an int. After the Person instance has been read out the DataInputStream gets closed.

private static Person read(DataInputStream source) {
    try (DataInputStream input = source) {
        return new Person(input.readUTF(), input.readInt());
    } catch (IOException e) {
        return null;
    }
}

After this code has been deployed in production for a while, a new requirement appears – software development is a job never done. The code now needs to de-serialise the data from a RandomAccessFile object as well. Ideally you would like to re-use the logic of the read method, which accepts a source of type DataInputStream. Unfortunately, an object of type RandomAccessFile is not a subtype of DataInputStream, so this won’t work.

Dummy interfaces

What can you do? Notice that both DataInputStream and RandomAccessFile implement the interfaces DataInput and Closeable. Consequently, you could refactor the method read by introducing a “dummy” interface whose only purpose is to create a type that extends both DataInput and Closeable:

interface DataInputCloseable extends DataInput, Closeable {}

You can now refactor the method read to use this interface as follows:

private static Person read(DataInputCloseable source) {
    try (DataInputCloseable input = source) {
        return new Person(input.readUTF(), input.readInt());
    } catch (IOException e) {
        return null;
    }
}

Thanks to this refactoring, you have gained both code re-use and flexibility: any source that implements DataInputCloseable can now be passed to the same implementation of the method read.

Using intersection types

Nonetheless, you had to introduce a kind of “dummy” interface whose only purpose is to create a new type which extends two types (DataInput and Closeable). Not only that, but because you were depending upon classes in the core JDK libraries you couldn’t retrofit that interface to the actual implementing classes. This is a situation where intersection types can be convenient. They let you do just that without introducing an unnecessary interface or class in your code. You can refactor the method read as follows:

private static <I extends DataInput & Closeable> Person read(I source) {
    try (I input = source) {
        return new Person(input.readUTF(), input.readInt());
    } catch (IOException e) {
        return null;
    }
}

In the code above, you introduced an intersection type, <I extends DataInput & Closeable>, which says that the type parameter I extends both DataInput and Closeable. You can then use this type parameter as the type of the method’s argument to indicate that the input must extend both DataInput and Closeable.
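Putting it all together, here is a runnable sketch (the file name and sample data are my own) showing the same read method accepting both a DataInputStream and a RandomAccessFile:

```java
import java.io.*;

public class IntersectionDemo {

    static class Person {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    // The intersection type accepts any source that is both a DataInput
    // and a Closeable, with no dummy interface required.
    private static <I extends DataInput & Closeable> Person read(I source) {
        try (I input = source) {
            return new Person(input.readUTF(), input.readInt());
        } catch (IOException e) {
            return null;
        }
    }

    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("person", ".bin");
        file.deleteOnExit();
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(file))) {
            out.writeUTF("Ada");
            out.writeInt(36);
        }

        // Works with a DataInputStream...
        Person p1 = read(new DataInputStream(new FileInputStream(file)));
        // ...and with a RandomAccessFile.
        Person p2 = read(new RandomAccessFile(file, "r"));

        System.out.println(p1.name + " " + p1.age);
        System.out.println(p2.name + " " + p2.age);
    }
}
```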


In this blog post, we showed that intersection types can be convenient in situations where you need to “extend from” two or more types in an existing library but you don’t have an existing interface or class to model that. In many cases this is unnecessary: introducing a new type via an interface or class gives you an explicit name, which helps the readability of your code. If the concept and name make sense, that’s what we would recommend doing. In this case that wasn’t possible, and intersection types saved the day.

Interested in Java Training? Check out my training business