Monday, 28 January 2013

A better implementation of Visitor pattern

The Visitor pattern is possibly the most complicated design pattern you will face. Not only are the explanations, implementations and examples you find on the Internet generally confusing and often divergent from one another, but the definition of what the Visitor Pattern is can also be obscure, and it is rarely explained properly with examples and real-world applications.

Visitor Pattern: which one?



In fact, several variations of what we call the Visitor Pattern exist. These variations are often simply called "Visitor Pattern", without the qualifiers that would be necessary to differentiate them. This certainly causes confusion. In addition, lots of articles about this pattern fail to explain properly what it is, how it works and how it should be employed in real-world applications. Add to that the intrinsic complexity this pattern involves, and the confusion sometimes caused by the nomenclature or definitions adopted.

This article aims to present a clear, non-ambiguous and non-misleading definition of what the Visitor pattern is, with real-world examples. We will demonstrate that a single pair of interfaces is enough to solve a vast range of problems, without any modification to this pair of interfaces to accommodate unforeseen situations. Complex use cases can be addressed by defining additional interfaces, which extend the original concept without changing the original pair of interfaces in any way.

Definition

From Wikipedia we read: In object-oriented programming and software engineering, the Visitor Design Pattern is a way of separating an algorithm from an object structure on which it operates. A practical result of this separation is the ability to add new operations to existing object structures without modifying those structures.

Let's clarify this definition a little bit: In object-oriented programming and software engineering, the Visitor Pattern is a way of separating a computation from the data structure it operates on.

In particular, we will employ the word computation instead of algorithm because the former is more restrictive and more specific than the latter. This article shows that obtaining elements from a data structure also involves an algorithm, which could cause confusion. For this reason, it's better to avoid the word algorithm entirely.

Interpretation

The definition says that the Visitor Pattern separates concerns, i.e.:
  •  on one hand you have the data access, which knows how a data structure can be traversed;
  •  on the other hand you have a certain computation you need to be executed.

Besides, there's a very important restriction: you cannot change the original data structure. In other words:
  •  the data access knows the data structure but does not know anything about the computation to be performed;
  •  the computation knows how to operate on a certain element of the data structure but doesn't know anything about the data access needed to traverse the data structure in order to obtain the next element, for example;
  •  you are not allowed to amend the data structure in any way: imagine a data structure which belongs to a third-party library for which you don't have the source code.

A key point here is the introduction of the concept of data access. In other words, we are saying that the data structure is not accessed directly, but via some sort of data access logic. This will become clear later.

Use cases

Let's invent scenarios which slowly evolve in complexity. Our aim is to demonstrate why the Visitor Pattern is needed and how it should behave and operate.

Scenario 1: simple array


Suppose we have a very simple array of Integers. We need to visit all elements and calculate the sum of all these elements.

Anyone will immediately think of something like the code below, won't they?

public int sumArray(Integer[] array) {
    int sum = 0;
    for (int i=0; i<array.length; i++) sum += array[i];
    return sum;
}


Yes, it solves the problem. Actually, the problem is so simple that you must be asking yourself why we would need anything more complicated than a simple one-liner.

OK. Let's go ahead to another use case.

Scenario 2: different data structures


This scenario is pretty similar to the previous one, but now we can have either an Integer[], a List<Integer> or a Map<T,Integer>. In this case, one immediately thinks of something like this:

public int sumList(List<Integer> list) {
    int sum = 0;
    for (int i=0; i<list.size(); i++) sum += list.get(i);
    return sum;
}

public <T> int sumMap(Map<T, Integer> map) {
    int sum = 0;
    for (Entry<T, Integer> entry : map.entrySet()) {
        sum += entry.getValue();
    }
    return sum;
}


OK. In this use case we can start to understand that there are different structures: Integer[], List<Integer> and Map<T,Integer> but, in the end, all you need to do is obtain an individual Integer from whatever data structure holds it and add it to the variable sum.

Let's complicate things a little more now.

Scenario 3: different data structures and several computations


This scenario is pretty similar to the previous one, but now we have several different computations: sum all elements, calculate their mean value, calculate the standard deviation, tell whether they form an arithmetic progression, tell whether they form a geometric progression, tell whether they are all prime numbers, and so on.

Well, in this case, you can figure out what will happen: you will face a proliferation of methods. Multiply the quantity of computations you are willing to provide by the number of different data structures you are willing to support. This proliferation of methods may not be desirable.

The point is: it does not make sense to implement the same logic over and over again just because the data structure is slightly different. Even when data structures are very different, what matters is that we are willing to perform a certain computation on all elements stored in them, no matter whether the container is a plain array, a map, a tree, a graph or whatever.

The computation only needs to know what needs to be done once an element is at hand; the computation does not need to know how an element can be obtained.

Another aspect is that it may not be desirable or may not be convenient to touch existing code, or existing data structures, to be more precise.

Yet another aspect: imagine that a certain computation requires additional variables, possibly associated with each element of the original data structure (or several data structures). These additional variables are only needed while you are performing the computations we described; they are not needed anywhere else. In certain constrained scenarios, you will not be allowed to waste memory like this.

Scenario 4: do not touch data structures

Imagine now that you have the problem proposed by scenario 3, but you cannot touch any data structures. Imagine that it's a legacy library in binary format; imagine that you don't have the source. You simply cannot add a single field anywhere in order to keep partial computations or keep response values. You cannot change the provided source code in any way simply because you don't have it. You obviously cannot add any methods intended to perform the computation.

What would be a solution in this case? Well... there's only one: you will have to implement something, whatever it is, outside the provided legacy library.

Conclusion of Use Cases

You may not be allowed to change the original source code of the data structure (or data structures) you have. Or it may not be desirable, as in the case of the proliferation of methods we mentioned before. Or it may not even be possible, as in the case of a provided legacy library. This is definitely the most important aspect of the Visitor Pattern: you are not allowed to change the original data structures in any way.

This aspect is commonly misunderstood by many people posting on the Internet and even in many academic works. It's very common to see implementations where data structures are changed: the object model is modified, data structures suddenly start to implement new interfaces, and so on.

If you find articles proposing changes to the original data structures, no matter how, you can be sure that these articles misunderstand what the Visitor Pattern really means.

Introducing the Visitor Pattern

It's time to present how the Visitor Pattern works.

The Visitor Pattern is defined as a pair of interfaces: a Visitor and a Visitable. There's also some additional wiring your code needs to perform when you call implementations of these interfaces.

Let's learn by example, showing how it can be applied to some real world situations.

A naive Visitor

Remember: a visitor defines a computation

The idea here is that we need an interface which defines how a computation works with respect to a single data element. It does not matter at this point how data elements are organised in the data structure.

Once the interface is defined, we then define a class which performs the computation. Because the interface does not know anything about how data elements are organised in data structures, the implementing class does not have any dependency on how data is organised. The class only cares about details strictly related to how the computation needs to be done; not what needs to be done in order to obtain data elements.

For the sake of brevity, let's present the Visitor interface and demonstrate only one of the several computations we may be interested in:

public interface Visitor<T> {
    public void visit(T element);
}

private class Sum<T extends Integer> implements Visitor<T> {
    private int sum = 0;

    @Override
    public void visit(T element) {
        sum += element;
    }

    public int value() {
        return sum;
    }
}


The interface Visitor defines a method which is responsible for receiving a single element from the data structure, whatever data structure it is. Classes which implement the Visitor interface are responsible for defining the computation, receiving one element at a time, without caring about how such element was obtained in the first place.

A naive Visitable

Remember: a Visitable defines data access

The idea is that we need an interface which defines how data can be obtained in general.

Once the interface is defined, we then define a class which knows how data can be obtained from a given data structure in particular. It means to say that, if we have several different data structures, we will need several classes implementing the Visitable interface, one class for each data structure involved.

Notice that nothing was said about the computation. Nothing involving the computation is the responsibility of interface Visitable: it only cares about how single data elements can be obtained from a given data structure.

For the sake of brevity, let's present the Visitable interface and demonstrate only one of the several possible data access implementations we may eventually need:

public interface Visitable<T> {
    public void accept(Visitor<T> v);
}

private class VisitableArray<T extends Integer> implements Visitable<T> {
    private final T[] array;

    public VisitableArray(final T[] array) {
        this.array = array;
    }

    @Override
    public void accept(Visitor<T> v) {
        for (int i=0; i<array.length; i++) {
            v.visit(array[i]);
        }
    }
}


The interface Visitable defines a method which is responsible for accepting an object which implements interface Visitor: a Visitable does not know anything about the computation, but it knows how a single data element must be passed to another class which is, in turn, responsible for performing the computation.

Classes implementing the Visitable interface are responsible for traversing the data structure, obtaining a single data element at a time. Also, once a single element is obtained, the class knows how another step of the computation can be triggered for the current data element at hand.

Putting it all together

Your code must now perform the following steps:
  •  instantiate a Visitor;
  •  instantiate a Visitable;
  •  bootstrap the Visitable with the Visitor;
  •  request the result of the computation.
It looks like this:

public int computeSumArray(final Integer[] array) {
    // instantiate a Visitor (i.e: the computation)
    final Visitor<Integer>   visitor   = new Sum<Integer>();

    // instantiate a Visitable (i.e: the data access to the data structure)
    final Visitable<Integer> visitable = new VisitableArray<Integer>(array);

    // bootstrap a Visitable with a Visitor
    visitable.accept(visitor);

    // returns value computed by the Visitor
    return ((Sum<Integer>) visitor).value();
}


Now imagine you'd like to calculate the mean of an array of Integers. All you need to do is define a class called Mean and reuse everything else, like this:

public int computeMeanArray(final Integer[] array) {
    // instantiate a Visitor (i.e: the computation)
    final Visitor<Integer>   visitor   = new Mean<Integer>();

    // instantiate a Visitable (i.e: the data access to the data structure)
    final Visitable<Integer> visitable = new VisitableArray<Integer>(array);

    // bootstrap a Visitable with a Visitor
    visitable.accept(visitor);

    // return value computed by the Visitor
    return ((Mean<Integer>) visitor).value();
}
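
For reference, here is a minimal sketch of what the class Mean could look like, following the same shape as Sum (note that it performs an integer division, matching the int return type used above):

private class Mean<T extends Integer> implements Visitor<T> {
    private int sum   = 0;
    private int count = 0;

    @Override
    public void visit(T element) {
        sum += element;
        count++;
    }

    public int value() {
        return (count == 0) ? 0 : sum / count;
    }
}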


Now imagine you'd like to obtain a Mean from a List<Integer> instead. All you need to do is define the specific data access for this data structure and reuse everything else, like this:

public int computeMeanList(final List<Integer> list) {
    // instantiate a Visitor (i.e: the computation)
    final Visitor<Integer>   visitor   = new Mean<Integer>();

    // instantiate a Visitable (i.e: the data access to the data structure)
    final Visitable<Integer> visitable = new VisitableList<Integer>(list);

    // bootstrap a Visitable with a Visitor
    visitable.accept(visitor);

    // return value computed by the Visitor
    return ((Mean<Integer>) visitor).value();
}
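
Likewise, here is a minimal sketch of what the data access VisitableList could look like, following the same shape as VisitableArray:

private class VisitableList<T extends Integer> implements Visitable<T> {
    private final List<T> list;

    public VisitableList(final List<T> list) {
        this.list = list;
    }

    @Override
    public void accept(Visitor<T> v) {
        for (final T element : list) {
            v.visit(element);
        }
    }
}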

Extended data structures

Now imagine a situation where you need to attach additional fields to a provided legacy data structure, without having access to its source code.

For the sake of simplicity, let's assume there's a way to unmistakably identify data elements: for arrays, it could be an index; for maps, trees and more complex data structures, it could be some sort of key.

In the example below we'd like to obtain the coordinates of the capitals of countries, whilst our data structure only contains country names and does not contain the coordinates of their capitals. The solution consists of retrieving coordinates as part of the method accept. In the real world, it may be useful to keep such additional fields in a separate data structure which holds complementary information relative to the original data structure. This is what we show below:

private class VisitableCapitalCoordinates implements Visitable<Coordinate> {
    private final String[] countries;
    private final Map<String,Coordinate> capitals; // for storage/caching, if required


    public VisitableCapitalCoordinates(
            final String[] countries,
            final /* @Mutable */ Map<String,Coordinate> capitals) {
        this.countries = countries;
        this.capitals  = capitals;
    }

    @Override
    public void accept(Visitor<Coordinate> v) {
        for (int i=0; i<countries.length; i++) {
            final String country = countries[i];
            final Coordinate coord = coord(country);
            capitals.put(country, coord); // cache into the extended data structure
            v.visit(coord);
        }
    }


    //
    // private method: not defined in the interface
    //

    private Coordinate coord(String country) {
       return obtainCoordinateOfCapitalSomehow(country);
    }

}


In the example above, the most important changes happen in the class constructor and in private methods. The method accept keeps its original signature and simply passes elements of the extended data structure instead of elements of the original data structure.

In the constructor, we receive two data structures: the original one and the extended one, which must be previously created. We show this in the example below:

    final String[] countries = { "USA", "United Kingdom", "France", "Holland", "Belgium" };
    final Map<String,Coordinate> capitals = new HashMap<String,Coordinate>();

    // instantiate a Visitor (i.e: the computation)
    final Visitor<Coordinate>   visitor   = new PlotCoordinate();

    // instantiate a Visitable (i.e: the data access to the data structure)
    final Visitable<Coordinate> visitable = new VisitableCapitalCoordinates(countries, capitals);

    // bootstrap a Visitable with a Visitor
    visitable.accept(visitor);


We've seen in the previous sections that our naive implementation of the Visitor Pattern
  •  has an interface Visitor which provides the computation;
  •  has an interface Visitable which provides the data access;
  •  does not require any modification on existing data structures.

Now let's complicate a little bit more this scenario.

Adding polymorphism

Let's imagine that an investor has an investment portfolio composed of several financial instruments, such as assets, options, bonds, swaps and more exotic financial instruments. Let's suppose we would like to forecast what would be the payoff of this portfolio given certain conditions.

The problem we are facing now is that the data structure is not uniform anymore, as it was in the scenarios we have seen previously. Now we have a portfolio which contains several different kinds of objects. We can think of a portfolio as a tree made of several different kinds of nodes. Some nodes are leaf nodes, such as an asset, an option or a bond. Other nodes are not leaf nodes: they aggregate other nodes under them, such as swaps and exotic instruments.

What this use case adds to the initial problem is the fact that now there's no single type <T> which represents all nodes. To be more precise: even if all nodes derive from a base type, we still need to make sure we are handling a certain node properly, according to its type, and also that we are computing its value properly, according to the type hierarchy. For example:

Object
    Payoff
        ForwardTypePayoff
        NullPayoff
        TypePayoff
            StrikedTypePayoff
                AssetOrNothingPayoff
                CashOrNothingPayoff
                GapPayoff
                PlainVanillaPayoff


At first glance, we might think that we could define a pair of classes PolymorphicVisitor and PolymorphicVisitable which take the node type as a generic parameter. Unfortunately, this does not work, despite making a lot of sense.

Because Java generics apply type erasure, our application will have trouble deciding between a PolymorphicVisitor<ForwardTypePayoff> and a PolymorphicVisitor<NullPayoff>, for example, because both will have their types erased and will become PolymorphicVisitor<Object>. Going straight to the solution, without spending any more time explaining all the intricacies, we need something like this:

public interface PolymorphicVisitor {
    public <T> Visitor<T> visitor(Class<? extends T> element);
}


The interface above provides a method visitor which is responsible for providing the actual Visitor for the element data. So, once you have the actual Visitor, you can delegate to it. If the actual visitor was not found, you can try again using the PolymorphicVisitor of the base class, and so on, like this:

@Override
public void accept(final PolymorphicVisitor pv) {
    final Visitor<FixedRateBondHelper> v = (pv!=null) ? pv.visitor(this.getClass()) : null;
    if (v != null) {
        v.visit(this);
    } else {
        super.accept(pv);
    }
}


We also need to define interface PolymorphicVisitable, which works in conjunction with PolymorphicVisitor, like this:

public interface PolymorphicVisitable {
    public void accept(PolymorphicVisitor pv);
}
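
To make the dispatch concrete, here is a hedged sketch of how the root and a leaf of the Payoff hierarchy could implement PolymorphicVisitable (the class bodies below are illustrative assumptions, not JQuantLib's actual code):

class Payoff implements PolymorphicVisitable {
    @Override
    public void accept(final PolymorphicVisitor pv) {
        final Visitor<Payoff> v = (pv!=null) ? pv.visitor(Payoff.class) : null;
        if (v != null) {
            v.visit(this);
        }
        // no super.accept(pv) here: Payoff is the root of the hierarchy
    }
}

class NullPayoff extends Payoff {
    @Override
    public void accept(final PolymorphicVisitor pv) {
        final Visitor<NullPayoff> v = (pv!=null) ? pv.visitor(NullPayoff.class) : null;
        if (v != null) {
            v.visit(this);
        } else {
            super.accept(pv); // fall back to the Visitor of the base class
        }
    }
}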


Notice that the actual visitor needs to be obtained from the class type passed in, as shown below:

@Override
public <CashFlow> Visitor<CashFlow> visitor(Class<? extends CashFlow> klass) {

    if (klass == org.jquantlib.cashflow.CashFlow.class) {
        return (Visitor<CashFlow>) new CashFlowVisitor();
    }
    if (klass == SimpleCashFlow.class) {
        return (Visitor<CashFlow>) new SimpleCashFlowVisitor();
    }
    if (klass == Coupon.class) {
        return (Visitor<CashFlow>) new CouponVisitor();
    }
    if (klass == IborCoupon.class) {
        return (Visitor<CashFlow>) new IborCouponVisitor();
    }
    if (klass == CmsCoupon.class) {
        return (Visitor<CashFlow>) new CmsCouponVisitor();
    }
    ...
}
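
Driving the whole machinery is then just a matter of traversing the data structure and letting each node pick its own Visitor. Here is a hedged sketch, where CashFlowsVisitor (an implementation of PolymorphicVisitor like the one above) and the leg collection of cash flows are assumptions:

final PolymorphicVisitor pv = new CashFlowsVisitor();
for (final PolymorphicVisitable cashflow : leg) {
    cashflow.accept(pv); // each node resolves the Visitor proper for its runtime type
}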

Sample Code

Please branch project JQuantLib from Launchpad:

    $ bzr branch lp:jquantlib


or, alternatively clone from Github:

    $ git clone http://github.com/frgomes/jquantlib


In particular, you are interested in the CashFlow hierarchy, under folder jquantlib/src/main/java:

org/jquantlib/util/Visitable.java
org/jquantlib/util/PolymorphicVisitable.java
org/jquantlib/util/PolymorphicVisitor.java
org/jquantlib/util/Visitor.java
org/jquantlib/cashflow/Event.java
org/jquantlib/cashflow/FloatingRateCoupon.java
org/jquantlib/cashflow/Coupon.java
org/jquantlib/cashflow/CmsCoupon.java
org/jquantlib/cashflow/CashFlows.java
org/jquantlib/cashflow/FixedRateCoupon.java
org/jquantlib/cashflow/CashFlow.java
org/jquantlib/cashflow/AverageBMACoupon.java
org/jquantlib/cashflow/SimpleCashFlow.java
org/jquantlib/cashflow/CappedFlooredCoupon.java
org/jquantlib/cashflow/Dividend.java
org/jquantlib/cashflow/IborCoupon.java

Conclusion

The implementation of the Visitor Pattern we present here is scalable and flexible. It does not need any interfaces or class methods crafted for one particular application.

We showed that specific interfaces and classes are necessary only when polymorphism is involved. We purposely defined PolymorphicVisitor and PolymorphicVisitable in order to make decisions regarding which is the correct Visitable for traversing a specific data structure and which is the correct Visitor needed once we have a certain data element in our hands. These decisions are not the responsibility of the simple Visitor and Visitable interfaces. For this reason, additional interfaces are definitely needed.

The Visitor Pattern as defined here conforms to the definition presented at the top of this article and does not require any change to existing data structures.

On the other hand, even with the advantages gained from our implementation, the Visitor Pattern still keeps its relatively high complexity. In general, code which employs the Visitor Pattern becomes obscure and difficult to understand, which means that documentation is key in order to keep the code organised and relatively easy to follow.

-----------------------------------------------------

If you found this article useful, it will be much appreciated if you create a link to this article somewhere in your website. Thanks

[ First published by Richard Gomes on 17:16, 29 January 2011 (GMT) ]

Saturday, 12 January 2013

Implementation of multiple inheritance in Java

In this article we demonstrate how multiple inheritance can be implemented easily in Java.

Sometimes you need a certain class to behave much like two or more base classes. As Java does not provide multiple inheritance, we have to circumvent this restriction somehow. The idea consists of:
  • creating an interface which exposes the public methods of a certain class;
  • making your class implement your newly created interface.
At this point, subclasses of your given class can implement the interface you have created, instead of extending your class. Some more steps are necessary:
  • Supposing you have an application class which needs multiple inheritance, make it implement several interfaces. Each interface is created as explained above and each interface has a given class which implements it.
  • Using the delegation pattern, make your application class delegate to the classes implementing the interfaces you need.
For example, imagine that you have
  • interface A and its implementation ClassA
  • interface B and its implementation ClassB
  • class ClassC which needs multiple inheritance from classes ClassA and ClassB
interface A {
    void methodA();
}

interface B {
    void methodB();
}

class ClassA implements A {
    public void methodA() { /* do something A */ }
}

class ClassB implements B {
    public void methodB() { /* do something B */ }
}

class ClassC implements A, B {

    // initialize delegates
    private final A delegateA = new ClassA();
    private final B delegateB = new ClassB();

    // implements interface A
    @Override
    public void methodA() {
        delegateA.methodA();
    }

    // implements interface B
    @Override
    public void methodB() {
        delegateB.methodB();
    }
}
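
A short usage sketch: an instance of ClassC can be used wherever an A or a B is expected, and calls are transparently forwarded to the delegates:

ClassC c = new ClassC();
c.methodA(); // delegates to ClassA
c.methodB(); // delegates to ClassB

A a = c; // ClassC is-an A
B b = c; // ClassC is-a  B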


If you found this article useful, it will be much appreciated if you create a link to this article somewhere in your website. Thanks

[ First published by Richard Gomes on 00:22, 11 March 2008 (UTC) ]

Strong type checking with JSR-308

In this article we demonstrate that strong type checking sometimes is not enough for the level of correctness some applications need. We will explore a situation where semantic checking is needed to achieve higher levels of correctness. Then we will explore how something like this can be done in C++ and how future improvements in the JDK will allow us to do it in Java, obtaining better results than C++ can offer.

Problem

Imagine the following code snippet:

  double calc(double rate, double year) {
    return Math.pow(1+rate, year);
  }

  double rate = 0.45;
  double year = 0.5;

  double result1 = calc(rate, year);
  double result2 = calc(year, rate);

Notice the two calls of the method calc. Although syntactically and semantically correct to the compiler, a human can easily see that something will probably go wrong. Ideally, we'd like the compiler to warn us about the error in order to shorten the development cycle.

The C++ solution

In C++ we can avoid such mistakes with typedefs:

  typedef double Rate;
  typedef double Year;

  double calc(Rate rate, Year year);

If you are using a powerful IDE, it will tell you the correct order of arguments and you will be able to avoid mistakes.
Important: The C++ compiler still does not prevent you from passing arguments in the wrong order. You will not get a compilation error if you pass arguments in the wrong order!

The Java solution

In Java you don't have typedefs. One possible alternative would be defining wrapper objects intended to hold primitive types, but this solution is very inefficient. I will avoid spending your time visiting all the candidate solutions only to show that they are not adequate for our needs. Let's go directly to what we need:
In the upcoming JDK7, we will have the possibility of using annotations in accordance with JSR-308. In a nutshell, it allows you to use annotations wherever you have a type in your code, and not only on declarations of classes, methods, parameters, fields and local variables. Let's examine an example:
private Double calc(@Rate double rate, @Time double time) {
  return new Double( Math.pow(1+rate, time) );
}

public void test() {
  @Rate double rate = 0.45;
  @Time double time = 0.5;

  // This call passes
  Double result1 = calc(rate, time);

  // This call *should* give us a compiler error
  Double result2 = calc(time, rate);
}
In the above specific example the code compiles fine in JDK6 but does not offer us the strong type checking that will be possible with JDK7.
If you have a nice IDE, it will tell you the correct order of parameters, showing you @Rate double instead of simply double.
 
Much like the C++ compiler, which was not able to detect the wrong order of parameters, javac will suffer from the same illness and will not detect the wrong order of parameters, because annotations are meaningless to javac.

On the other hand, javac is able to execute annotation processors you specify on the command line. In particular, we can write an annotation processor which verifies whether you are passing a @Rate where a @Rate is expected, and so on. It means that JDK7, with the help of JSR-308 annotation processors, is able to detect the mistake we pointed out at the beginning of this article.
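
For illustration, assuming a hypothetical checker class named org.example.RateTimeChecker packaged in checkers.jar, the invocation could look like this (the -processor and -processorpath flags are standard javac options; the checker itself is an assumption):

  $ javac -processorpath checkers.jar -processor org.example.RateTimeChecker Calc.java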

Going ahead, with JDK7 we can produce code with all the very strong type checks we need to obtain very robust code. See the example below:
private @NonNull Double calc(@Rate double rate, @Time double time) @ReadOnly {
  // The following statement will fail because the return type cannot be null
  if (condition) return null;
  // The following statement passes
  return new Double( Math.pow(1+rate, time) );
}

public void test() {
  @Rate double rate = 0.45;
  @Time double time = 0.5;

  // This call passes
  Double result1 = calc(rate, time);

  // This statement will fail because result1 became read-only
  result1 = 1.0;

  // This call *should* give us a compiler error
  Double result2 = calc(time, rate);
}

Conclusion

Thanks to the upcoming features of JDK7, programmers will be able to obtain robust code with very strong type checking in Java, comparable to the quality of code written in C++.


If you found this article useful, it will be much appreciated if you create a link to this article somewhere in your website. Thanks

[ First published by Richard Gomes on 21:22, 27 January 2008 (UTC) ]

Performance of numerical calculations

In this article we explore the controversial subject of the performance of Java applications compared with C++ applications. Without any passion, and certainly not willing to promote another flame war, we try to demonstrate that most comparisons are simply wrong because they essentially compare very different things. We also present alternative implementations which address most issues.


Problem

Performance of numerical applications written in Java can be poor.


Solution

First of all, performance is a subject that becomes impossible to debate if we demand precision to the last millisecond. Even a piece of code written in assembly, running on the same computer, can present different execution times when run several times, due to a number of factors outside the focus of this article.

Talking about code written in Java, the subject becomes more complex because the same Java code will eventually end up as different assembly code on different platforms. In addition, there are different JVMs and different JIT compilers for different platforms.

Running the same Java code on a home PC and on an IBM AIX server can result in huge differences in performance, not only because we expect a server to be much faster than a home PC, but mainly because there are a number of optimizations present in IBM's JVM, specifically targeting the hardware platform, which are not available in a stock JVM for a generic home PC.

The tagline "Java programs are very slow when compared with C++ programs" is even less precise than what we just exposed. In general, what happens is that very skilled C++ programmers compare their best algorithms and very optimized implementations against some Java code copied from Java tutorials. Very skilled Java developers know that tutorial code works as expected but is certainly not meant to perform well.

Another point to consider is that the default runtime libraries provided by Java cannot be expected to perform well. The same applies to other languages, including C/C++. It is true that C/C++ have some widely accepted libraries which perform very well, but the point is that you can potentially find something which performs better for the specific hardware platform and the specific problem domain you have.

In the specific situation of Java applications, there are lots of techniques intended to improve performance of the underlying JVM and certainly there are several techniques which can be applied to Java programs in order to avoid operations which are not needed. In order to compare a benchmark test written in C++ against one written in Java, these aspects must be considered, otherwise we will be comparing code written by C++ gurus against code written by Java newbies.

When researching this subject, I found a number of benchmark references, but I haven't spent any time copying source code and changing it in order to perform the comparison the way I'd like to. I preferred to adopt a certain "threshold" of, I'd say, 30%, which means that a difference of 30% or less implies we don't have enough precision. It does not mean that the involved parties perform equally. It only means we don't have enough elements to compare with accuracy.

Taking the previous paragraphs into consideration, Java code can be compared with more or less equivalent C++ code. On the other hand, big differences in performance tell us that we should try to identify the reasons for poor performance and eventually propose something better.
Analyzing the benchmark results, we can identify that:
  • Data structures perform badly: list operations (one-dimensional data structures) are 2 times slower, whilst matrices can be even 3 times slower.
  • Sort algorithms involve data structures and are certainly affected by the slowness of data structures.
  • Trigonometric functions are not a major concern for our specific problem domain.
  • Nested loops were not analyzed.

In order to address the major issues, we evaluated these technologies:
  • FastUtil package, which offers collections of primitive types. This is where Java code can beat C++ code, due to the way C++ handles pointers as opposed to the way Java handles arrays.
  • JAL package, which mimics most of the C++ STL functionality, providing fast algorithms on arrays of primitive types, which perform much faster than arrays of Objects.
  • Colt package, which is an excellent class library used at CERN. In particular, Colt contains a modified version of JAL.
  • Parallel Colt package, which is a re-implementation of the Colt package and aims to take advantage of modern multi-core CPUs.


If you found this article useful, it will be much appreciated if you create a link to this article somewhere in your website. Thanks

[ First published by Richard Gomes on 14:24, 10 February 2008 (UTC) ]

Handling of float point rounding errors

In this article we demonstrate that a very elementary mathematical statement can raise big concerns for numerical applications.

Reasoning


Try this simple yet tricky code:
    System.out.println(0.1 + 0.1 + 0.1);

Oh well... the result is: 0.30000000000000004
What??? So... 0.1 + 0.1 + 0.1 is not 0.3 ???

Problem


This problem arises from the way floating point numbers are represented in the processor and from how they participate in floating point calculations.
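Consider the following sketch of the comparison discussed next:

    double x = 0.1 + 0.1 + 0.1;
    if (x == 0.3) {
        System.out.println("equal");
    } else {
        System.out.println("not equal"); // this branch is taken!
    }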
Notice that the comparison if (x==0.3) jumped to the else branch. After the sum, the result was 0.30000000000000004 and not 0.3 as we expected. The behaviour of the if statement is correct. In fact, the error is located between the keyboard and the chair: you simply cannot do such a comparison!

You have to:
  • remember that floating point errors may happen;
  • evaluate the epsilon associated with the operations you have previously done;
  • consider a certain range in your comparisons,
like this:

    epsilon = blah blah blah; // calculate epsilon somehow
    if ((x>=0.3-epsilon) && (x<=0.3+epsilon)) ...

Solution


JQuantLib takes the same approach as QuantLib: it calculates an epsilon after a sequence of mathematical operations, which gives us the order of magnitude of the error.
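
As a hedged illustration of the idea (not QuantLib's exact computation), a tolerance of a few ulps around the expected value often suffices:

    double x = 0.1 + 0.1 + 0.1;
    double epsilon = 4 * Math.ulp(0.3); // a few units in the last place
    if (Math.abs(x - 0.3) <= epsilon) {
        // safe to treat x as 0.3
    }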


If you found this article useful, it will be much appreciated if you create a link to this article somewhere in your website. Thanks

[ First published by Richard Gomes on 21:47, 27 January 2008 (UTC) ]

Use CachedRowSet and get rid of low level SQL

The interface CachedRowSet provides interesting mechanisms for anyone willing to work with JDBC with ease. A CachedRowSet is basically a matrix in memory which maps to objects in your database, like this:

  • you have as many columns as fields in a given database table;
  • you have as many rows as records in a given database table;
  • the idea applies to views and joined tables as well.

Instead of sending SQL commands to the database, you change cells in this matrix and, when you have finished all your changes in memory, you ask the CachedRowSet to do the dirty work for you. Example:

    // update current row
    crs.updateShort("Age", 58);
    crs.updateInt("Salary", 150000);
    crs.updateRow();

    // insert new row
    crs.moveToInsertRow();
    crs.updateString("Name", "John Smith");
    crs.updateInt("Age", 42);
    crs.updateShort("Salary", 78000);
    crs.insertRow();
    crs.moveToCurrentRow();

    // update the underlying database
    crs.acceptChanges();

The CachedRowSet implementation is a refined piece of software which takes care of all the details related to JDBC programming. You not only have something more convenient than coding SQL commands by hand, but you also have stable software at your service, doing quality work the right way.

In addition, a CachedRowSet is meant to be used disconnected from the database most of the time. After you load a CachedRowSet with data from the database, you can serialize it, send it to the web browser, get it back modified, and only after that call acceptChanges. You can also build a CachedRowSet from scratch, populate it by hand and call acceptChanges.
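
Here is a minimal loading sketch; the connection settings are placeholders, and CachedRowSetImpl is the class shipped with the reference implementation:

    CachedRowSet crs = new CachedRowSetImpl();
    crs.setUrl("jdbc:mysql://localhost/testdb");
    crs.setUsername("user");
    crs.setPassword("secret");
    crs.setCommand("SELECT Name, Age, Salary FROM Employees");
    crs.execute(); // connects, runs the query, caches the rows, then disconnects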

More info: CachedRowSet Javadoc API


CachedRowSet on Java7


Java7 ships with an RI (reference implementation) of the interface CachedRowSet. Just use it as you would any other class or interface. Have fun!


CachedRowSet on Java5 and Java6

On Java5 and Java6 only the interfaces are part of the JDK. You will have to download the RI (reference implementation) by hand. You can download it from here:
JDBC Rowset 1.0.1 JWSDP 1.4 Co-Bundle MREL

Uncompress and upload rowset.jar into your artifact repository exactly like this: javax.sql.rowset:rowset:1.0.1
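
In a Maven pom.xml, that coordinate reads:

    <dependency>
      <groupId>javax.sql.rowset</groupId>
      <artifactId>rowset</artifactId>
      <version>1.0.1</version>
    </dependency>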

$ ls jdbc_rowset_tiger-1_0_1-mrel-jwsdp.zip
$ unzip jdbc_rowset_tiger-1_0_1-mrel-jwsdp.zip
$ ls jdbc-rowset/lib/rowset.jar
$ firefox jdbc-rowset/docs/index.html

Note: You can find "rowset" in Jarvana and the like, but that dependency will not help you, since it refers to a pom.xml file, not the rowset.jar file you are interested in. There's no rowset.jar file stored in Maven Central due to licensing issues.

Saturday, 5 January 2013

A NEO will destroy us?

Just by coincidence, the name of this blog site "Notes Experiences and Opinions" produces an acronym which also matches "Near Earth Object", which I find an interesting subject, in particular after the cancelled doomsday, allegedly predicted by the Mayan calendar.

A NEO (Near Earth Object) is a celestial body which orbits close enough to Earth and which is big enough to threaten Earth on a regional or global scale. An example is the event which happened in Tunguska in 1908, which caused great devastation due to an explosion of something between 5 and 30 megatons.

If you are disappointed that the Doomsday was a complete fiasco, you might be pleased to know that there's an asteroid which will approach us very, very closely, more exactly on 16/Feb/2013. It will fly by 45,000km from Earth, which is approximately 10 times closer to us than the Moon (see orbit). Despite being very close, it will not collide with us.

This sort of near miss is relatively common but collisions are very, very rare. For the future, calculations show that we are not at risk for centuries to come.

If you like Astronomy you may also consider supernovas or other mighty celestial bodies emitting lethal gamma rays and so on. Don't worry: a supernova would be a threat to us only if it were closer than some 25ly (light-years) from us. As far as we know, there are no candidates for such a cataclysm sufficiently near to us. Our Sun will certainly destroy this planet some day when it swells into a red giant... but don't worry: it will still take 5 billion years to happen.

It looks like there are no astronomical cataclysms which can pose any danger to this planet anytime soon. In this case, we are forced to face a very simple and hard reality: we are still the biggest threat to this planet.

I wanna tell you why I dislike Ubuntu

I'm sure this post will cause commotion ... but I don't like Ubuntu. Let me explain why.

At a certain point in my life, I decided that it was time to throw away Windows for good. Linux was sufficiently mature and there were sufficiently good office productivity tools, which was the biggest concern to me at that point. I've been a user of Unix systems since 1986, including several Linux distributions since 1994. I banned Windows from my life in 2004 and I've been using Linux on all computers I own ever since. To me, Ubuntu seemed the best option for a Linux distribution at that point, since it had recent, good enough versions of office productivity tools, browsers and mail readers.

I remember that, in 2005, I did a system upgrade. Well, after that I was not able to connect to the Internet anymore. Pretty bad! After spending a couple of days researching what the problem was (thanks to a laptop at hand!), I finally discovered that my network card was not supported by the Linux kernel anymore. Actually, it was a bug, a regression. After evaluating the available options and the effort each one involved, I decided that I should simply buy and install a supported network card.

A couple of months later came another system upgrade. This time the printer stopped working. Long story short: two more days wasted trying to find a way to circumvent the difficulty. I don't remember anymore what the solution was, but I remember that my wife complained a lot because the printer was not working anymore.

I certainly recognize the importance of Ubuntu for the popularity of Linux. Thanks to Canonical, Linux gained a lot of momentum and became a relatively popular operating system for non-initiated end-users. Before, Linux was only meant for nerds.

Because Ubuntu evolves quickly, I've installed it again many times, in particular on laptops, where I can quickly reinstall everything if needed without taking the risk of losing important stuff. At the moment I have Ubuntu installed on a workstation too, so that I can do some software development work on GPUs which specifically depends on Ubuntu, among some other distributions. Evolution and innovation demand an operating system (or a distribution) which also evolves quickly.

But I have had bitter experiences. Not only the two I've described above, but several others. The "upgrade-and-break-stuff" thing happened many, many times. The last time was just two months ago, when I was playing with XBMC on a laptop, taking the TV feed from my workstation: after the upgrade, it became impossible to watch TV over the network again.

I definitely prefer Debian. I use it on production systems. Debian is rock-solid stable, secure and reliable. In the 7 years I've been using Debian every day, I've never had a single regression problem. I can blindly upgrade packages, or even the entire Debian release, without any trouble: Debian simply works.

Update: I've got rid of Ubuntu for good. I'm now using Debian Wheezy and compiling from sources some tools I need in order to go ahead with software development work on GPUs.

Python stack for GPU programming

I've recently started to play with Copperhead, a research project by Nvidia which provides GPU programming in a very elegant way on top of the Python language.

If you are interested in CUDA/GPU development, I recommend you stick with Nvidia's requirements, in particular regarding the specific distributions supported by the Nvidia CUDA Toolkit. Since I don't like Ubuntu, I tried to install the aforementioned toolkit on Debian, but I faced troubles. In the end, I built a separate box specially for CUDA/GPU development, based on Ubuntu. My environment consists of:
  • Kubuntu 11.10 ( which is basically Ubuntu 11.10 + KDE4 )
  • Nvidia CUDA Toolkit 5.0 
Long story short, I decided to build Python and several libraries from sources, since the packaged versions were somewhat old or inconvenient for my purposes. I'm building the stack below from sources:
  • Python 2.7.3
  • Numpy 1.6.2
  • SciPy 0.11
  • HDF5 1.8.10
  • PyTables 2.4.0
  • ViTables 2.1
  • Pandas 0.9.1
  • Copperhead trunk
  • recent versions of IPython and Cython
The strategy I employed consists of:
  1. build Python 2.7.3
  2. install virtualenv and virtualenvwrapper on top of Python 2.7.3
  3. install everything else on top of a virtual environment
You can see the entire script below:


#!/bin/bash -x

####
#
# We will install this stack onto /opt/local
#
# Notice that I've installed NVidia CUDA5 Toolkit on
# /opt/local/cuda but, for some reason, Copperhead seems to
# require it from /usr/local/cuda. I've circumvented the trouble by
# simply creating a symlink, as you can see in function
# build_copperhead.
#
#####

export PREFIX=/opt/local

function create_opt_local() {
  rm -r -f ~/.virtualenvs
  rm -r -f ${PREFIX}/bin ${PREFIX}/include ${PREFIX}/lib ${PREFIX}/share
  sudo mkdir -p ${PREFIX}/bin ${PREFIX}/include ${PREFIX}/lib ${PREFIX}/share
  sudo chown -R rgomes:rgomes ${PREFIX}
}

#--------------------------------------------------
# essential tools
#--------------------------------------------------
function install_essential_tools() {
  sudo apt-get install zip unzip bzip2 gzip xz-utils wget curl -y
  sudo apt-get install build-essential gcc g++ gfortran -y
  sudo apt-get install autoconf scons pkg-config -y
  sudo apt-get install git bzr mercurial -y
}

#--------------------------------------------------
# downloading everything
#--------------------------------------------------
function download_everything() {
  mkdir -p ~/Downloads
  pushd ~/Downloads
  if [ ! -f install_python_environment.done ] ;then
    wget http://sourceforge.net/projects/boost/files/boost/1.48.0/boost_1_48_0.tar.bz2
    wget http://thrust.googlecode.com/files/thrust-1.6.0.zip
    wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tar.xz
    wget http://downloads.sourceforge.net/project/numpy/NumPy/1.6.2/numpy-1.6.2.tar.gz
    wget http://downloads.sourceforge.net/project/scipy/scipy/0.11.0/scipy-0.11.0.tar.gz
    wget http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.10.tar.bz2
    wget http://www.hdfgroup.org/ftp/lib-external/szip/2.1/src/szip-2.1.tar.gz
    wget http://downloads.sourceforge.net/project/pytables/pytables/2.4.0/tables-2.4.0.tar.gz
    wget http://vitables.googlecode.com/files/ViTables-2.1.tar.gz
    wget http://pypi.python.org/packages/source/p/pandas/pandas-0.9.1.tar.gz
    touch install_python_environment.done
  else
    echo "Files already downloaded"
  fi
  popd
}

#--------------------------------------------------
# build Python from sources
#--------------------------------------------------
function build_python() {
  cd ~/sources
  sudo apt-get install libicu44 libicu-dev zlib1g-dev libbz2-dev libncurses5-dev libreadline-gplv2-dev libsqlite3-dev libssl-dev libgdbm-dev -y
  #--
  tar xf ~/Downloads/Python-2.7.3.tar.xz
  cd Python-2.7.3
  export CC=gcc
  export CXX=g++
  export LD_RUN_PATH=${PREFIX}/lib
  ./configure --prefix=${PREFIX} --enable-shared --enable-unicode=ucs4 --with-pydebug
  make -j 8
  #make test
  make install
  unset CC CXX LD_RUN_PATH
}

#--------------------------------------------------
# install package managers
#--------------------------------------------------
function install_package_managers() {
  cd ~/sources
  curl http://python-distribute.org/distribute_setup.py | ${PREFIX}/bin/python
  curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | ${PREFIX}/bin/python
}

#--------------------------------------------------
# install virtualenv and virtualenvwrapper
#--------------------------------------------------
function install_virtualenv() {
  cd ~/sources
  sudo apt-get install python-virtualenv virtualenvwrapper -y
}

#--------------------------------------------------
# virtual environment for Python 2.7
#--------------------------------------------------
function create_virtualenv_py27() {
  mkvirtualenv --no-site-packages --python=${PREFIX}/bin/python py27
  #--
  echo "export PATH=${PREFIX}/bin:${PREFIX}/cuda/bin:$PATH" >> ~/.virtualenvs/py27/bin/postactivate
  echo "export LD_LIBRARY_PATH=${PREFIX}/lib" >> ~/.virtualenvs/py27/bin/postactivate
}

#--------------------------------------------------
# install other packages with pip
#--------------------------------------------------
function install_packages_1() {
  pip install setuptools --upgrade
  pip install distribute --upgrade
  pip install Sphinx
  pip install Cython
  pip install tornado
  pip install pyzmq
  pip install ipython
  pip install uncertainties
}

#--------------------------------------------------
# build numpy from sources
#--------------------------------------------------
function build_numpy() {
  cd ~/sources
  sudo apt-get install libatlas-base-dev libblas-dev -y
  tar xf ~/Downloads/numpy-1.6.2.tar.gz
  cd numpy-1.6.2
  rm -r -f build
  python --version
  python setup.py --requires
  python setup.py build
  python setup.py install
}

#--------------------------------------------------
# build scipy from sources
#--------------------------------------------------
function build_scipy() {
  cd ~/sources
  tar xf ~/Downloads/scipy-0.11.0.tar.gz
  cd scipy-0.11.0
  rm -r -f build
  python --version
  python setup.py --requires
  python setup.py build
  python setup.py install
}

#--------------------------------------------------
# install other packages with pip
#--------------------------------------------------
function install_packages_2() {
  pip install numexpr
}

#--------------------------------------------------
# build szip from sources
#--------------------------------------------------
function build_szip() {
  cd ~/sources
  sudo apt-get install zlib-bin -y
  #--
  tar xf ~/Downloads/szip-2.1.tar.gz
  cd szip-2.1
  ./configure --prefix=${PREFIX} --enable-encoding
  make clean
  make -j 8
  make install
}

#--------------------------------------------------
# build HDF5 from sources - requires szip
#--------------------------------------------------
function build_hdf5() {
  cd ~/sources
  sudo apt-get build-dep hdf5 -y
  tar xf ~/Downloads/hdf5-1.8.10.tar.bz2
  cd hdf5-1.8.10
  CC=/usr/bin/mpicc ./configure --prefix=${PREFIX} --enable-shared --enable-parallel --with-szlib=${PREFIX}
  make clean
  make -j 8
  make check
  make install
  make check-install
  make clean
}

#--------------------------------------------------
# build pytables from sources
#--------------------------------------------------
function build_pytables() {
  cd ~/sources
  sudo apt-get install libbz2-dev liblzo2-dev -y
  #--
  tar xf ~/Downloads/tables-2.4.0.tar.gz
  cd tables-2.4.0/
  python --version
  python setup.py --requires
  CC=/usr/bin/mpicc HDF5_DIR=${PREFIX} python setup.py install
}

#--------------------------------------------------
# build vitables from sources
#--------------------------------------------------
function build_vitables() {
  cd ~/sources
  tar xf ~/Downloads/ViTables-2.1.tar.gz
  cd ViTables-2.1/
  python --version
  python setup.py --requires
  python setup.py build
  python setup.py install
}

#--------------------------------------------------
# build pandas from sources
#--------------------------------------------------
function build_pandas() {
  cd ~/sources
  tar xf ~/Downloads/pandas-0.9.1.tar.gz
  cd pandas-0.9.1
  rm -r -f build dist
  python --version
  python setup.py --requires
  python setup.py build
  python setup.py install
}

#--------------------------------------------------
# build boost from sources
#--------------------------------------------------
function build_boost() {
  cd ~/sources
  tar xf ~/Downloads/boost_1_48_0.tar.bz2
  cd boost_1_48_0
  #-- install on ${PREFIX}
  mkdir -p ${PREFIX}
  #--
  ./bootstrap.sh --prefix=${PREFIX} --with-python=${PREFIX}/bin/python --with-icu
  ./b2 -j 8
  ./b2 install
}

#--------------------------------------------------
# install thrust library
#--------------------------------------------------
function build_thrust() {
  pushd ${PREFIX}/include
  unzip ~/Downloads/thrust-1.6.0.zip
  popd
}

#--------------------------------------------------
# build copperhead from sources
#--------------------------------------------------
function build_copperhead() {
  cd ~/sources
  git clone http://github.com/copperhead/copperhead.git
  cd copperhead
  #--
  rm -r -f dist stage .sconf_temp .sconsign.dblite config.log
  rm -r -f ${PREFIX}/lib/python2.7/site-packages/copperhead*
  #--
  sudo rm /usr/local/cuda
  sudo ln -s ${PREFIX}/cuda /usr/local/cuda
  sudo ldconfig /usr/local/cuda/lib64
  #--
cat << EOD > siteconf.py
BOOST_INC_DIR = "${PREFIX}/include"
BOOST_LIB_DIR = "${PREFIX}/lib"
BOOST_PYTHON_LIBNAME = "boost_python"
CUDA_INC_DIR = "/usr/local/cuda/include"
CUDA_LIB_DIR = "/usr/local/cuda/lib64"
NP_INC_DIR = "${PREFIX}/lib/python2.7/site-packages/numpy/core/include"
TBB_INC_DIR = None
TBB_LIB_DIR = None
THRUST_DIR = "${PREFIX}/include"
EOD
  cat siteconf.py
  #--
  python --version
  python setup.py install
}


#--------------------------------------------------
# main script
#--------------------------------------------------

create_opt_local
install_essential_tools

download_everything

build_python
install_package_managers

install_virtualenv
source /etc/bash_completion.d/virtualenvwrapper
create_virtualenv_py27
workon py27

install_packages_1
build_numpy
build_scipy
install_packages_2
build_szip
build_hdf5
build_pytables
build_vitables
build_pandas
build_boost
build_thrust
build_copperhead

echo "done."