Friday 29 May 2015

NetBeans

I’ve just returned from the first NetBeans Day conference in London and I have some impressions to share.

A little bit of background

I have been a happy NetBeans user for quite a while now. I use it routinely to develop my personal projects (EnumJ is one and there is another that I haven’t disclosed yet) and this last conference opened new doors.
Disclaimer: the dispute “which editor is best” is a type of religious war that rarely brings any good. So, my intent here is not such a dispute. I do not believe in relativism, I don’t consider that all things are the same or that all views are of equal value. However, the choice of editor really does not make a difference: the best editor or IDE for a developer is the one that makes him or her happy and productive.
That being said, I’ll explain why NetBeans makes me happy.

Why NetBeans

My love affair with IDEs started with Turbo Pascal 2.0, more than two decades ago. Over the years I’ve used both IntelliJ and Eclipse for Java. For Python I used Eclipse and the most charming PyCharm.

I also love Vim and I’ve never used Emacs. For more than a decade I was an adherent of Visual SlickEdit (popular within Microsoft at one time, allegedly Dave Cutler‘s favourite). I purchased (as a Microsoft Alumnus) almost all the editions of Visual Studio over the years. I tried SharpDevelop a couple of times but I found it immature, and I’ve never used MonoDevelop.

All the editors and IDEs that I used have their merits (including IntelliJ and Eclipse, competitors to NetBeans), but NetBeans makes me tick because:
  • It is free. This is slightly better than IntelliJ. Note: I don’t imply that paying for IntelliJ is a waste of money; I just find it more compelling to program in a free IDE on a free platform like Java.
  • It is simple. It does everything I need without unnecessary complexity.
  • It is standard. This is a very important reason for me. The fact that it had strong support for the latest Java out of the box made me stick with it.
  • It has unparalleled support for Maven (very important to me) and Java Enterprise (less important for the moment, but this may change).
  • It is a platform in itself and this is related to the content of the conference I’ve just attended.
No wonder I was eagerly anticipating the very first NetBeans Day conference in London.

NetBeans Day

It was a whole-day conference, kindly hosted on the University of Greenwich‘s campus. Food was provided (nothing gourmet, to be sure, in the well-established tradition of geeky events like OJC, for example). The talks were of unequal quality, mainly because some were not very related to NetBeans or NetBeans’ extensibility - arguably people’s most important reason to attend.

However, three highlights make me warmly recommend the event:
  • opening talk by Geertjan Wielenga which showed what NetBeans can do for you – nice even though I am not a fan of shortcuts and editing tricks (after years of assembly in kdbg1), I want to rejoice that mice exist)
  • talk on the evolution from BlueJ and Greenfoot to NetBeans with emphasis on education – important for a father like me
  • closing talk by Geertjan Wielenga on how to program solutions on NetBeans platform – important for a “lazy” GUI developer like me
The last point hit a nerve: although all the IDEs I know exhibit extensibility in one fashion or another, it seems to me that NetBeans has a more general type of extensibility - which is not confined to adding new programming languages or programming productivity features. It looks as if in NetBeans you can program anything - Geertjan even showed us an application in farming!

Conclusion

Although not a perfect event (understandable for a first edition), the NetBeans Day in London is definitely not to be missed. I am looking forward to the next edition which, according to the organizers, may happen later this year.

1) The Windows Kernel Debugger, part of the DDK. Not to be confused with KDE GNU Debugger.

Saturday 16 May 2015

Design Pattern: fast concurrency with detached state

The basic equation of multi-threading can be summarised as follows:
Concurrency + Mutability = Performance Penalty + Error Proneness
which is a direct result of context switches and the explicit management of state consistency in the presence of concurrency.
In this article I’ll show a design pattern where the equation changes into:
Concurrency + Mutability = Efficiency + Simplicity
Disclaimer: this pattern almost surely exists somewhere else. Like many good things in life, it can bear the burden of many parents. I mention it here because it is related to my previous article.

Example

Let’s consider a class named Thing with properties volume and mass and another property named density derived from the previous two. Because it’s very hard to calculate density from mass and volume (tongue-in-cheek), its value needs to be cached so that we don’t repeat the complex calculation density = mass / volume every single time.
In other words, the property values are correlated and any object of type Thing must be consistent. The “canonical” way to maintain consistency in presence of concurrency is by enclosing state changes in synchronized blocks:
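A minimal sketch of that “canonical” synchronized approach might look like this (field and method names are illustrative, not taken from the original post):

```java
// Hypothetical sketch: every state change and every read of the cached
// derived value is guarded by the object's monitor.
public class Thing {
    private double volume;
    private double mass;
    private Double density; // cached derived value; null means "stale"

    public synchronized void setVolume(double volume) {
        this.volume = volume;
        this.density = null; // invalidate the cache on every state change
    }

    public synchronized void setMass(double mass) {
        this.mass = mass;
        this.density = null;
    }

    public synchronized double getDensity() {
        if (density == null) {
            density = mass / volume; // the "expensive" calculation
        }
        return density;
    }
}
```

The synchronized methods keep the three properties mutually consistent, but every access pays the locking cost - exactly the performance penalty the equation above describes.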

Tuesday 12 May 2015

Caching enumerations: the internals


In the previous post I wrote about the correct implementation for caching enumerables over large sets that involve a large number of compositions.
In this article I will give implementation details.

Two ways of being lazy

Lazy evaluation is in the bone and marrow of the EnumJ library. Caching enumeration is no exception: the internal caching buffer grows lazily, as the consuming enumerators require.
EnumJ has two classes that help lazy evaluation:

Lazy<T>

Lazy<T> is an implementation of LazyInitializer<T> that takes a supplier and overrides initialize() to call the supplier when initializing the object. The value is initialized only once, even under multi-threading, and access is fast once initialization has been carried out.
In addition, Lazy<T> ensures that the supplier gets released upon initialization. The code is quite simple:
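A simplified, standalone sketch of such a class could look as follows. The real Lazy<T> extends Apache Commons Lang’s LazyInitializer<T>; this version inlines the equivalent double-checked locking so the example has no external dependency:

```java
import java.util.function.Supplier;

// Illustrative sketch, not the actual EnumJ source: initializes once,
// then serves the cached value on the fast path, and releases the
// supplier so it can be garbage-collected.
public final class Lazy<T> {
    private volatile boolean initialized = false;
    private Supplier<T> supplier; // nulled out after initialization
    private T value;

    public Lazy(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    public T get() {
        if (!initialized) {                  // fast path after init
            synchronized (this) {
                if (!initialized) {
                    value = supplier.get();  // called at most once
                    supplier = null;         // release the supplier
                    initialized = true;      // publish safely (volatile)
                }
            }
        }
        return value;
    }
}
```

The volatile flag plus the synchronized block guarantee that the supplier runs at most once and that the value is safely published to all threads.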

Saturday 9 May 2015

Caching enumerations


Enumerators are particularly well suited to specify sets of elements via composing constraints, generators or any combination thereof. Enumerators have a declarative flavour that almost matches Mathematics. For example, a set definition like this one:
A = { x in N | 0 <= x <= 10 and x is even }
can nicely be expressed as:
Enumerator<E> e = Enumerator.rangeInt(0, 11).filter(x -> 0 == x % 2);
or, in Enumerable parlance:
Enumerable<E> e = Enumerable.rangeInt(0, 11).filter(x -> 0 == x % 2);
the difference being that an Enumerable<E> can participate in for(each) statements and can be enumerated upon more than once.

The problem

All is great for a few compositions, but enumerators are designed to support large numbers of compositions, in the range of hundreds of thousands or as many as memory allows. The sheer existence of that many operations implies that a single original element undergoes hundreds of thousands of transformations before coming out as an enumerated result.
There is nothing we can do: if the operations are really needed then they must be carried out, period.
The problem is when we have to enumerate repeatedly over a sequence that doesn’t change: if the transformations have no side effects, then re-computing the same result means that hundreds of thousands of transformations are being applied uselessly over and over. Such a waste!
The solution comes in the form of the age-old method of caching.

The solution

Caching, when done carefully, saves a great deal of time by:
  • storing locally what is remote
  • storing in memory what is on slow media
  • storing for fast retrieval what is computed too slowly
Our case falls in the third category. Before showing implementation details, let us first overview a couple of sub-optimal solutions.
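The third category can be illustrated with a generic memoizer - an illustrative sketch, unrelated to EnumJ’s actual implementation, that stores each result the first time it is computed:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative memoizer: the slow function runs only on a cache miss;
// subsequent lookups for the same key are fast retrievals.
public final class Memoizer<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> slowFunction;

    public Memoizer(Function<K, V> slowFunction) {
        this.slowFunction = slowFunction;
    }

    public V get(K key) {
        // computeIfAbsent invokes the slow function at most once per key
        return cache.computeIfAbsent(key, slowFunction);
    }
}
```

Caching enumerations follow the same principle, except that what gets cached is the growing sequence of enumerated elements rather than a key-value mapping.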

Saturday 2 May 2015

Shareability

The idleness of the past holiday weeks brought me a much-needed insight: how important scalable shareability is for Enumerable<T> (if such an interface were to be of any use for high-compositionality scenarios).

What is shareability?

It is the ability to be shared as a separate “image”, while each “shared” image retains the properties of the original. In the case of Enumerable<T> this means participating in multiple operational compositions and still doing the expected job efficiently in terms of computation as well as memory and stack consumption.
Here is a simple example in which an enumerable named e is shared between two independent compositions:

Enumerable<T> e = ... some code ...
Enumerable<U> uE = e.map(t2uMapper);
Enumerable<V> vE = e.map(t2vMapper);
for(U u: uE) {
    System.out.println("My Us: " + u + " ...");
}
for(V v: vE) {
    System.out.println("My Vs: " + v + " ...");
}
The goal is to ensure that both loops work properly and, more importantly, the whole thing is scalable: if we have a chain of hundreds of thousands of operations peppered with thousands of points where enumerables are being shared between independent pipelines, all the resulting enumerables (which may come and go at runtime and number in the range of millions or more) still work properly, efficiently and do not overflow the stack (assuming we have enough heap).

What is the support for it?

Before discussing how to implement this in the EnumJ library, let us see what support we have out there for such a thing.