
10 Tips - Reference Documentation

Authors: The Whole GPars Gang

Version: 1.0.0

10 Tips

General GPars Tips

Grouping

High-level concurrency concepts, such as Agents, Actors or Dataflow tasks and operators, can be grouped around shared thread pools. The PGroup class and its subclasses are convenient GPars wrappers around thread pools. Objects created using a group's factory methods will share the group's thread pool.

def group1 = new DefaultPGroup()
def group2 = new NonDaemonPGroup()

group1.with {
    task {...}
    task {...}
    def op = operator(...) {...}
    def actor = actor {...}
    def anotherActor = group2.actor {...}   //will belong to group2
    def agent = safe(0)
}

When customizing the thread pools for groups, consider using the existing GPars implementations, the DefaultPool or ResizeablePool classes, or create your own implementation of the groovyx.gpars.scheduler.Pool interface and pass it to the DefaultPGroup or NonDaemonPGroup constructors.
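
As a rough sketch (the group names and pool sizes below are arbitrary), a customized pool is simply passed to the group constructor:

import groovyx.gpars.group.DefaultPGroup
import groovyx.gpars.group.NonDaemonPGroup
import groovyx.gpars.scheduler.DefaultPool
import groovyx.gpars.scheduler.ResizeablePool

//a daemon group backed by a fixed pool of 10 threads
def computeGroup = new DefaultPGroup(new DefaultPool(true, 10))
//a non-daemon group backed by a pool that grows and shrinks on demand
def ioGroup = new NonDaemonPGroup(new ResizeablePool(false))

Remember to call shutdown() on non-daemon groups once you no longer need them, otherwise their threads will keep the JVM alive.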

Java API

Most of the GPars functionality can be used from Java just as well as from Groovy. Check out the 2.6 Java API - Using GPars from Java section of the User Guide and experiment with the maven-based stand-alone Java demo application. Take GPars with you wherever you go!

10.1 Performance

Your code in Groovy can be just as fast as code written in Java, Scala or any other programming language. This should not be surprising, since GPars is technically a solid, tasty Java-made cake with Groovy DSL cream on top.

Unlike in Java, however, with GPars, as with other DSL-friendly languages, you are very likely to experience a useful kind of speed-up for free, one that comes from a better and cleaner design of your application. Coding with a concurrency DSL gives you a smaller code-base that uses concurrency primitives as language constructs, so it is much easier to build robust concurrent applications and to identify and eliminate potential bottlenecks or errors.

While this whole User Guide describes how to use Groovy and GPars to create beautiful and robust concurrent code, let's use this chapter to highlight a few places where some code tuning or minor design compromises could give you interesting performance gains.

Parallel Collections

Methods for parallel collection processing, like eachParallel(), collectParallel() and the like, use Parallel Array, an efficient tree-like data structure, behind the scenes. This data structure has to be rebuilt from the original collection each time you call any of the parallel collection methods. Thus, when chaining parallel method calls, you might consider using the map/reduce API instead, or resorting to the ParallelArray API directly, to avoid the repeated Parallel Array creation overhead.

GParsPool.withPool {
    //chained xxxParallel() methods - each call builds a new Parallel Array behind the scenes
    people.findAllParallel{it.isMale()}.collectParallel{it.name}.any{it == 'Joe'}
    //the map/reduce API - the Parallel Array is built only once for the whole chain
    people.parallel.filter{it.isMale()}.map{it.name}.filter{it == 'Joe'}.size() > 0
    //the ParallelArray API used directly
    people.parallelArray.withFilter({it.isMale()} as Predicate).withMapping({it.name} as Mapper).any{it == 'Joe'} != null
}

In many scenarios, changing the pool size from the default value may give you performance benefits. This holds especially if your tasks perform IO operations, like file or database access or networking: since such tasks spend much of their time waiting, increasing the number of threads in the pool is likely to help performance.

GParsPool.withPool(50) {    //a larger pool of 50 threads, suited to IO-bound tasks
    …
}

Since the closures you provide to the parallel collection processing methods will be executed frequently and concurrently, you may gain a further slight benefit by rewriting them in Java.

Actors

GPars actors are fast. DynamicDispatchActors and ReactiveActors are about twice as fast as DefaultActors, since they do not have to maintain implicit state between subsequent message arrivals. DefaultActors are, in fact, on par in performance with actors in Scala, which you can hardly hear described as slow.

If top performance is what you're looking for, a good start is to identify the following patterns in your actor code:

actor {
    loop {
        react {msg ->
            switch(msg) {
                case String:…
                case Integer:…
            }
        }
    }
}
and replace them with a DynamicDispatchActor:
messageHandler {
    when{String msg -> ...}
    when{Integer msg -> ...}
}

The loop and react methods are rather costly to call.

Defining your DynamicDispatchActors or ReactiveActors as classes, instead of using the messageHandler and reactor factory methods, will also give you some more speed:

class MyHandler extends DynamicDispatchActor {
    public void handleMessage(String msg) {
        …
    }

    public void handleMessage(Integer msg) {
        …
    }
}

Now, moving the MyHandler class into Java will squeeze the last bit of performance from GPars.

Pool adjustment

GPars allows you to group actors around thread pools, giving you the freedom to organize actors any way you like. It is always worthwhile to experiment with the actor pool size and type. FJPool usually gives better characteristics than DefaultPool, but seems to be more sensitive to the number of threads in the pool. Sometimes using a ResizeablePool or ResizeableFJPool could help performance by automatically eliminating unneeded threads.

def attackerGroup = new DefaultPGroup(new ResizeableFJPool(10))
def defenderGroup = new DefaultPGroup(new DefaultPool(5))

def attacker = attackerGroup.actor {...}
def defender = defenderGroup.messageHandler {...}
...

Agents

GPars Agents are even a bit faster at processing messages than actors. The advice to group agents wisely around thread pools and to tune the pool sizes and types applies to agents just as it does to actors. With agents, you may also benefit from submitting Java-written closures as messages.
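
As a rough sketch of the above (the group setup is arbitrary), an agent created through a group's factory method, like the safe() call shown earlier, shares that group's thread pool, so the same pool-tuning advice applies:

import groovyx.gpars.group.DefaultPGroup
import groovyx.gpars.scheduler.ResizeablePool

def agentGroup = new DefaultPGroup(new ResizeablePool(true))

def counter = agentGroup.safe(0)        //an agent guarding a single integer value
counter << {updateValue(it + 1)}        //messages are closures applied to the current state
counter.await()                         //wait until all submitted messages have been processed
assert counter.val == 1

agentGroup.shutdown()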

Share your experience

The more we hear about GPars uses in the wild the better we can adapt it for the future. Let us know how you use GPars and how it performs. Send us your benchmarks, performance comparisons or profiling reports to help us tune GPars for you.

10.2 Integration into hosted environment

Hosted environments, such as Google App Engine, impose additional restrictions on threading. For GPars to integrate better with these environments, the default thread factory and timer factory can be customized. The GParsConfig class provides static initialization methods that allow third parties to register their own implementations of the PoolFactory and TimerFactory interfaces, which will then be used to create the default pools and timers for Actors, Dataflow and PGroups.

public final class GParsConfig {
    private static volatile PoolFactory poolFactory;
    private static volatile TimerFactory timerFactory;

    public static void setPoolFactory(final PoolFactory pool)

    public static PoolFactory getPoolFactory()

    public static Pool retrieveDefaultPool()

    public static void setTimerFactory(final TimerFactory timerFactory)

    public static TimerFactory getTimerFactory()

    public static GeneralTimer retrieveDefaultTimer(final String name, final boolean daemon)
}

The custom factories should be registered immediately after application startup so that Actors and Dataflow can use them for their default groups.
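
A minimal registration sketch, assuming hypothetical MyGaePoolFactory and MyGaeTimerFactory classes that implement the PoolFactory and TimerFactory interfaces for the target environment:

import groovyx.gpars.util.GParsConfig

//hypothetical factories tailored to the hosting environment
final poolFactory = new MyGaePoolFactory()
final timerFactory = new MyGaeTimerFactory()

//register as early as possible, before any Actors, Dataflow elements or PGroups are created
GParsConfig.poolFactory = poolFactory
GParsConfig.timerFactory = timerFactory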

Compatibility

Some further compatibility problems may occur when running GPars in a hosted environment. The most noticeable one is probably the lack of ForkJoinThreadPool (aka jsr-166y) support in GAE. Functionality such as Fork/Join and GParsPool may thus not be available on some services. However, GParsExecutorsPool, Dataflow, Actors, Agents and Stm should work normally, even when using managed non-Java SE thread pools.
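
To illustrate the last point, a minimal sketch of running parallel collection processing on top of an existing executor service; the plain JDK executor below merely stands in for whatever managed pool the hosting environment provides:

import groovyx.gpars.GParsExecutorsPool
import java.util.concurrent.Executors

def managedPool = Executors.newFixedThreadPool(4)   //stand-in for an environment-managed pool
GParsExecutorsPool.withExistingPool(managedPool) {
    assert [1, 2, 3].collectParallel {it * 2} == [2, 4, 6]
}
managedPool.shutdown()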