Parsing a Maven POM III

Due to popular demand, I moved the latest code for parsing a Maven POM (including its parent POMs) into a tiny GitHub project:

https://github.com/fuinorg/utils4maven

Here is a short example of how to use it:

// Reading the model loads everything from remote
// repository using JBoss Shrinkwrap Resolver!
Model model = MavenPomReader.readModel("org.fuin:objects4j:0.6.1");
System.out.println(model.getName());
// Should print 'Objects4J'

A full example can be found here: MavenPomReaderTest.java

Finding method calls programmatically

In my current project, a defect was raised with the following exception attached:

java.lang.ArithmeticException: Non-terminating decimal expansion; no exact representable decimal result.

After I dug into the code, I found that the cause was simple: no rounding mode was given for a calculation that can produce a non-terminating decimal expansion, as in this example:

BigDecimal.ONE.divide(BigDecimal.valueOf(3));

Every senior developer knows about this trap, but it’s still easy to forget. A default of BigDecimal.ROUND_HALF_UP, applied when no rounding mode is defined explicitly, would have made the method safer to use.
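To illustrate, here is a minimal, self-contained sketch of both the failing call and the fixed one. Note that current JDKs deprecate the BigDecimal.ROUND_HALF_UP constant in favour of java.math.RoundingMode.HALF_UP, which is used below; the class name is made up for the example:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DivideExample {
    public static void main(String[] args) {
        try {
            // No scale/rounding mode: 1/3 has no exact decimal representation
            BigDecimal.ONE.divide(BigDecimal.valueOf(3));
        } catch (ArithmeticException ex) {
            System.out.println("Failed: " + ex.getMessage());
        }

        // Safe: scale and rounding mode are given explicitly
        BigDecimal third = BigDecimal.ONE.divide(
                BigDecimal.valueOf(3), 10, RoundingMode.HALF_UP);
        System.out.println(third); // 0.3333333333
    }
}
```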

How can such failures be prevented in the future? One could write Checkstyle rules, FindBugs rules, or a simple unit test.

I chose the unit test, so the question became “How can I find all the methods that call a given method in Java?” This is not possible with the standard Reflection API, but I had already used OW2’s ASM for such tasks, so it wasn’t too hard to create a JUnit assertion for it.

Just include this in your unit tests:


// Path to your project's '*.class' files
final File classesDir = new File("target/classes");

// Can be used to exclude some files/packages
final FileFilter fileFilter = new FileFilter() {
    @Override
    public boolean accept(File file) {
        return !file.getPath().contains("my/pkg/to/exclude");
    }
};

// Define the methods to find
final MCAMethod divide = new MCAMethod("java.math.BigDecimal",
        "java.math.BigDecimal divide(java.math.BigDecimal)");

final MCAMethod setScale = new MCAMethod("java.math.BigDecimal",
        "java.math.BigDecimal setScale(int)");

// Fails if any class calls one of the two methods
AssertUsage.assertMethodsNotUsed(classesDir, fileFilter,
        divide, setScale);

The full source code can be found on GitHub.

Mixins with pure Java

Implementation of mixins using AOP (AspectJ) or source-code modification (JaMoPP)

In object-oriented programming languages, a mixin is a defined unit of functionality that can be added to a class. An important benefit is that during development you can concentrate on the properties of a particular behaviour rather than on inheritance structures.

In Scala for example, a variant of mixins can be found under the name of “traits”. Although Java does not provide direct support for mixins, these can easily be added on with a few annotations, interfaces and some tool support.

Occasionally you read in online articles that mixins are incorporated into Java 8. Unfortunately, this is not the case. One feature of the Lambda project (JSR 335) is the so-called “Virtual Extension Methods” (VEM).

While these are similar to mixins, they have a different background and are significantly more limited in functionality. The motivation for introducing VEMs was the problem of keeping interfaces backward compatible when new methods are added to them.
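To illustrate that limitation: a default method (as VEMs ended up being called in Java 8) can mix behaviour into implementing classes, but it cannot contribute instance state — the implementing class must supply the state itself. The names below are invented for the sketch:

```java
/** Behaviour mixed in via a default method - note: no instance state possible. */
interface Greetable {
    // The implementing class must supply the state itself
    String name();

    // The "mixed-in" behaviour
    default String greet() {
        return "Hello, " + name() + "!";
    }
}

public class VemExample implements Greetable {
    @Override
    public String name() {
        return "World";
    }

    public static void main(String[] args) {
        System.out.println(new VemExample().greet()); // Hello, World!
    }
}
```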

As “real” mixins are not expected in the Java language in the near future, this article intends to demonstrate how it is already possible to create mixin support in Java projects now, using simple methods. To do this, we will discuss two approaches: using AOP with AspectJ and using source-code modification with JaMoPP.

Why not just inheritance?

When asked at an event, “What would you change about Java if you could reinvent it?”, James Gosling, the inventor of Java, is said to have answered, “I would get rid of the classes.”

After the laughter had died down, he explained what he meant by that: inheritance in Java, which is expressed with the “extends” relationship, should – wherever possible – be replaced by interfaces [Why extends is evil].

Any experienced developer knows what he meant: inheritance should be used sparingly. It is very easy to misuse it as a purely technical construct for code reuse instead of modelling a genuine, domain-motivated parent-child relationship with it.

But even if one considers such technically motivated code reuse legitimate, one quickly reaches the limits, as Java does not allow multiple inheritance.

Mixins are useful whenever several classes share similar properties or define similar behaviour that cannot reasonably be modelled via lean inheritance hierarchies.

In English, terms ending in “able” (e.g. “sortable”, “comparable” or “commentable”) are often an indicator for applications of mixins. Likewise, when you start writing “utility” methods to avoid code duplication across implementations of an interface, this can be a sign of a meaningful use case.

Mixins with AOP

So-called inter-type declarations, offered by the AspectJ Eclipse project, are an extremely simple way of implementing mixins. Among other things, they make it possible to add new instance variables and methods to any target class.

This is shown below, based on the small example in Listing 1. For this, we will use the following terms:

  • Base-Interface Describes the desired behaviour. Classes that do not use the mixin can still implement this interface directly.
  • Mixin-Interface Intermediate interface that is used in the aspect and implemented by classes that are to use the mixin.
  • Mixin-Provider Aspect which provides the implementation for the mixin.
  • Mixin-User Class which uses (implements) one or more mixin interfaces.

// === Listing 1 ===

/** Base-Interface */
public interface Named {
    public String getName();
}

/** Mixin-Interface */
public interface NamedMixin extends Named {
}

/** Mixin-Provider */
public aspect NamedAspect {
    private String NamedMixin.name;
    public final void NamedMixin.setName(String name) {
        this.name = name;
    }
    public final String NamedMixin.getName() {
        return name;
    }   
}

/** Mixin-User */
public class MyClass implements NamedMixin {
   // Could have more methods or use different mixins
}

Listing 1 shows a complete AOP-based mixin example. If AspectJ is set up correctly, the following code should compile and run without errors:


MyClass myObj = new MyClass();
myObj.setName("Abc");
System.out.println(myObj.getName());

Working with the AOP variant is quite comfortable, but there are also a few disadvantages, which will be explored here.

First of all, inter-type declarations cannot deal with generic types in the target class. In many cases this is not strictly necessary, but it can be very practical. For example, the “Named” interface could just as well be defined with a generic type instead of “String”. It would then define the behaviour for arbitrary name types, and the implementing class would determine what the concrete name type looks like.

A further disadvantage is that the methods generated by AspectJ follow their own naming conventions. This makes it difficult to inspect the classes using reflection, as you have to reckon with method names such as “ajc$interMethodDispatch …”

Last but not least, without the support of the development environment, you cannot see the source code in the target class and are dependent on the interface declaration alone. This could, however, be seen as an advantage, since the using classes contain less code.

Excursus: Java Model Parser and Printer (JaMoPP)

An alternative to implementing mixins with AspectJ is offered by the Java Model Parser and Printer (JaMoPP). Simply put, JaMoPP can read Java source code, represent it as an object graph in memory, and transform (i.e. write) it back into text.

With JaMoPP, it is therefore possible to process Java code programmatically and thus, for example, automate refactorings or implement your own code analyses. Technologically, JaMoPP is based on the Eclipse Modeling Framework (EMF) and EMFText. JaMoPP is jointly developed by the Technical University of Dresden and DevBoost GmbH and is freely available on GitHub as an open-source project.

Mixins with JaMoPP

In the following, we take up the example from the AOP mixins and expand it slightly. First, we define a few annotations:

  • @MixinIntf Indicates a mixin interface.
  • @MixinProvider Indicates a class which provides the implementation for a mixin. The implemented mixin interface is specified as the only parameter.
  • @MixinGenerated Marks methods and instance variables which have been generated by the mixin. The only parameter is the class of the mixin provider.

In the following, we also expand the interfaces and classes from Listing 1 with a generic type for the name. Only the class using the mixin defines which concrete type the name actually has.


// === LISTING 2 ===

/** Base-Interface (Extended with generic parameter) */
public interface Named<T> {
    public T getName();
}

/** Mixin-Interface */
@MixinIntf 
public interface NamedMixin<T> extends Named<T> {
}

/** Mixin-Provider */
@MixinProvider(NamedMixin.class)
public final class NamedMixinProvider<T> implements Named<T> {

    @MixinGenerated(NamedMixinProvider.class)
    private T name;

    @MixinGenerated(NamedMixinProvider.class)
    public void setName(T name) {
        this.name = name;
    }

    @Override
    @MixinGenerated(NamedMixinProvider.class)
    public T getName() {
        return name;
    }
    
}

/** Special name type (Alternative to String) */
public final class MyName {
    private final String name;

    public MyName(String name) {
        super();
        if (name == null) {
            throw new IllegalArgumentException("name == null");
        }
        if (name.trim().length() == 0) {
            throw new IllegalArgumentException("name is empty");
        }
        this.name = name;
    }

    @Override
    public String toString() {
        return name;
    }

}

The class that is to use the mixin now again implements the mixin interface, as shown in Listing 3. In order to “blend” the fields and methods defined by the mixin provider into the MyClass class, a code generator is used.

With the help of JaMoPP, this modifies the MyClass class and adds the instance variables and methods provided by the mixin provider.


// === LISTING 3 ===

/** Mixin-User */
public class MyClass implements NamedMixin<MyName> {
    // Could have more methods or use different mixins
}

The code generator works as follows: like the normal Java compiler, it reads the source code of every class and, in doing so, examines the set of implemented interfaces.

If a mixin interface is present, i.e. an interface annotated with @MixinIntf, the corresponding provider is looked up and its instance variables and methods are copied into the class implementing the mixin.

To trigger the generation of mixin code, there are currently two options: an Eclipse plug-in that runs directly when saving, or a Maven plug-in that runs as part of the build.

Installation instructions and the source code of both plug-ins can be found on GitHub in the small SrcMixins4J project. A screencast that demonstrates the use of the Eclipse plug-in is also available there. Listing 4 shows how the modified target class then looks.


// === LISTING 4 ===

/** Mixin-User */
public class MyClass implements NamedMixin<MyName> {

    @MixinGenerated(NamedMixinProvider.class)
    private MyName name;

    @MixinGenerated(NamedMixinProvider.class)
    public void setName(MyName name) {
        this.name = name;
    }

    @Override
    @MixinGenerated(NamedMixinProvider.class)
    public MyName getName() {
        return name;
    }

}

If the mixin interface is removed from the “implements” section, all of the provider’s fields and methods annotated with “@MixinGenerated” will be deleted automatically. Generated code can be overridden at any time by removing the “@MixinGenerated” annotation.

 


Conclusion

As native support for mixins in the Java language standard is not expected in the foreseeable future, you can currently make do with a little AOP or source-code generation. Which of the two options you choose depends essentially on whether you prefer to keep the mixin code separate from your application code or want it directly in the respective classes.

In either case, development speed increases noticeably, and you can concentrate less on inheritance hierarchies and more on defining functional behaviour.

Neither approach is perfect. In particular, conflicts are not automatically resolved. Methods with the same signature from different interfaces which are provided by different mixin providers will, for example, lead to an error in a class which uses both mixins.

Anyone who needs more would have to switch to a language with native mixin support, such as Scala.

Interfaces with default implementation – Mixins with AspectJ

Some weeks ago, I implemented several classes for the CQRS Meta Model. I found myself repeating the same code over and over again, as the classes could not all extend the same base class.

Damn… I wished Java had mixins! After a short look at Qi4j, which seemed a bit too heavyweight for my little use case, I remembered AspectJ’s inter-type declarations: you can declare members (fields, methods, and constructors) that are owned by other types!

So here we go – Let’s define some behavior:

/**
 * Something that has a comment.
 */
public interface Commentable {

    public void setComment(String comment);

    public String getComment();

}

Now we’re going to create an aspect that provides the default implementation for the above interface:

/**
 * Implements the behavior of an object that has a comment assigned.
 */
public aspect CommentableAspect {

    private String Commentable.comment;

    public final String Commentable.getComment() {
        return this.comment;
    }
    
    public final void Commentable.setComment(final String comment) {
        this.comment = comment;
    }
}

All you have to do now is to implement the interface:

/**
 * Class with a comment field. 
 */
public class TestClass implements Commentable {

    // All methods are already implemented by 
    // simply adding the interface!
    
}

That’s it! All the necessary fields and methods are now added by AspectJ. You can now concentrate on composing behavior instead of thinking about a hierarchy of subclasses.

If you’d like to, it’s also possible to override the provided default methods (caution: you’d have to remove the “final” from the aspect!). I personally prefer the Design for Extension principle: my methods are always abstract, final, or have an empty implementation.
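As an aside, the Design for Extension principle mentioned here can be sketched in plain Java (all names are invented for the example): public entry points are final template methods, and subclasses hook in only via abstract methods or empty-implementation hooks.

```java
/** "Design for Extension": the public method is a final template method;
    subclasses hook in only via abstract or empty protected methods. */
public abstract class Report {

    // final: the algorithm itself cannot be broken by subclasses
    public final String render() {
        return header() + body();
    }

    // abstract: subclasses MUST provide this part
    protected abstract String body();

    // empty implementation: subclasses MAY override this hook
    protected String header() {
        return "";
    }
}

class SimpleReport extends Report {
    @Override
    protected String body() {
        return "content";
    }

    @Override
    protected String header() {
        return "[title] ";
    }
}
```

With this split, new SimpleReport().render() yields “[title] content”, and no subclass can accidentally change the rendering order.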

Combining Strong Typing and Bean Validation (JSR 303)

Sometimes it’s nice to use strong typing instead of repeating the same checks across all layers and tiers. Interestingly, making a class robust against misuse is very similar to using Java Bean Validation.

A classical approach may look like this:

public class User {

    private static final Pattern PATTERN = Pattern.compile("[a-z][0-9a-z_\\-]*");

    private String name;

    public User(String name) {
        super();
        if (name == null) {
            throw new IllegalArgumentException("name == null");
        }
        String trimmed = name.trim().toLowerCase();
        if (trimmed.length() == 0) {
            throw new IllegalArgumentException("length name == 0");
        }
        if (trimmed.length() < 3) {
            throw new IllegalArgumentException("length name < 3");
        }
        if (trimmed.length() > 20) {
            throw new IllegalArgumentException("length name > 20");
        }
        if (!PATTERN.matcher(trimmed).matches()) {
            throw new IllegalArgumentException("name pattern violated");
        }
        this.name = trimmed;
    }

}

Using Bean Validation, we could create a custom constraint instead:

@Size(min = 3, max = 20)
@Pattern(regexp = "[a-z][0-9a-z_\\-]*")
@Target({ ElementType.METHOD, ElementType.PARAMETER, ElementType.FIELD, ElementType.ANNOTATION_TYPE })
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = {})
@Documented
public @interface UserName {

    String message() default "{org.fuin.blog.UserName.message}";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};

}

The User class now looks much better:

public class User {

    @NotNull
    @UserName
    private String name;

    public User(String name) {
        super();
        this.name = name;
    }

}

But now, the object has lost the ability to protect itself against misuse. It’s no longer a robust object. Maybe someone uses a validator to check if the object is valid; maybe not. In any case, it’s always possible to create invalid objects of this kind.

How about combining both techniques?

Let’s rename the UserName annotation to UserNameStr, because it actually works on a string; this way we also avoid a name clash with the new strong type we will create shortly:

@Size(min = 3, max = 20)
@Pattern(regexp = "[a-z][0-9a-z_\\-]*")
@Target({ ElementType.METHOD, ElementType.PARAMETER, ElementType.FIELD, ElementType.ANNOTATION_TYPE })
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = {})
@Documented
public @interface UserNameStr {

    String message() default "{org.fuin.blog.UserNameStr.message}";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};

}

Next, we create a base class for all such string-based strong types:

public abstract class AbstractStringBasedType<T extends AbstractStringBasedType<T>> implements Comparable<T>, Serializable {

    private static final long serialVersionUID = 0L;

    private static final Validator VALIDATOR;

    static {
        VALIDATOR = Validation.buildDefaultValidatorFactory().getValidator();
    }

    public final int hashCode() {
        return nullSafeToString().hashCode();
    }

    public final boolean equals(final Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final T other = (T) obj;
        return nullSafeToString().equals(other.nullSafeToString());
    }

    public final int compareTo(final T other) {
        return this.nullSafeToString().compareTo(other.nullSafeToString());
    }

    public final int length() {
        return nullSafeToString().length();
    }

    protected final void requireValid(final T value) {
        final Set<ConstraintViolation<T>> constraintViolations = VALIDATOR.validate(value);
        if (constraintViolations.size() > 0) {
            final StringBuffer sb = new StringBuffer();
            for (final ConstraintViolation<T> constraintViolation : constraintViolations) {
                if (sb.length() > 0) {
                    sb.append(", ");
                }
                sb.append("[" + constraintViolation.getPropertyPath() + "] "
                        + constraintViolation.getMessage() + " {"
                        + constraintViolation.getInvalidValue() + "}");
            }
            throw new IllegalArgumentException(sb.toString());
        }
    }

    private String nullSafeToString() {
        final String str = toString();
        if (str == null) {
            return "null";
        }
        return str;
    }

    public abstract String toString();

}

The refactored UserName class now uses the Bean Validation API to perform a constraint check at the end of the constructor, which means it is no longer possible to create invalid objects:

public final class UserName extends AbstractStringBasedType<UserName> {

    private static final long serialVersionUID = 0L;

    @NotNull
    @UserNameStr
    private final String userName;

    public UserName(final String userName) {
        super();
        this.userName = userName;

        // Always the last line in the constructor!
        requireValid(this);
    }

    public String toString() {
        return userName;
    }

}

The refactored User class is now even simpler and contains only a @NotNull annotation on the name property:

public class User {

    // Only null check here, because all other
    // checks are done by user name itself
    @NotNull
    private UserName name;

    public User(UserName name) {
        super();
        this.name = name;
    }

}

Here is a simple example using the UserName type:

public class Example {

    public static void main(String[] args) {

        Locale.setDefault(Locale.ENGLISH);

        try {
            new UserName(null);
        } catch (IllegalArgumentException ex) {
            // [userName] may not be null {null}
        }

        try {
            new UserName("");
        } catch (IllegalArgumentException ex) {
            // [userName] must match "[a-z][0-9a-z_\-]*" {},
            // [userName] size must be between 3 and 20 {}
        }

        try {
            new UserName("_a1");
        } catch (IllegalArgumentException ex) {
            // [userName] must match "[a-z][0-9a-z_\-]*" {_a1}
        }

        // Valid name
        System.out.println(new UserName("john-2_a"));

    }

}

If we don’t want to use strong typing, it’s easy to use only the annotations. This is especially helpful when you have to deal with non-Java clients. In such situations, a DTO may contain only a simple annotated String:

public class UserDTO implements Serializable {

    private static final long serialVersionUID = 1L;

    @NotNull
    @UserNameStr
    private String name;

    public UserDTO(String name) {
        super();
        this.name = name;
    }

}

Now strong types and Bean Validation can live in peaceful coexistence within your application. It’s a good idea to check whether your GUI controls support such JSR 303 enhanced strong types before you start using them. Otherwise, you may lose the ability to validate upfront on the client (e.g. checking the input length based on the @Size annotation).

@Ignore unit tests immediately if they fail!

In development teams, there is often discussion about using the @Ignore annotation for failing tests.

In particular, developers who are very enthusiastic about writing unit tests often tend to be very dogmatic about it. They argue, “A test should never be ignored! It’s better to have the build server on Yellow if tests fail. That way we always have an overview of our problems and are forced to fix them!”

What might sound good at first is not a good idea, at least when it comes to large development teams.

If you have several commits from multiple developers, and maybe even additional deliveries from different branches to the main development line, the build will inevitably break from time to time. How long does it take to fix such an error? Sometimes it may be easy, but in some cases it may take days to fix a broken test. Now let’s assume we have established the rule that a test should never be ignored. This means the build may stay Yellow for a long time.

If another developer now updates his workspace with the latest version from the trunk and runs the unit tests locally, he will see failing tests even though he didn’t make any mistakes! That’s pretty bad in itself. He then checks the build server and, after some time looking around, realizes it wasn’t his fault, as the server was already in the Yellow state. Should he commit now? If he does, he will receive endless emails from the build server about a broken build state that he didn’t cause. If not, his work and maybe the work of others will be blocked until the test is fixed. In a typical, pretty hectic IT project with its almost-impossible-to-meet deadlines, this is not really an option.

What is the solution?

  1. If you cause a build to break, @Ignore the failing test immediately and commit the change.
  2. Now that the build server is Green again (or Blue in the case of Hudson), start fixing the test.
  3. After you have fixed the test, commit the change and check to see if the build stays Green.
  4. Have some kind of statistic page on the build server that lists all ignored tests – this allows you to easily track the tests that are currently disabled.

Don’t get me wrong: all failing unit tests should be fixed as fast as possible! But my failure when committing a change should never prevent other team members from doing their work.