Friday, August 28, 2015

Ordered Scheduler

Background


LMAX's Disruptor is a great piece of software engineering and can serve many use cases. However, as I showed in my previous article on the cost of signalling, multiplying this kind of structure can lead to scheduler contention, because we increase the number of threads that may request a CPU time slice from the OS scheduler.

At Ullink we faced this issue in a very concrete case. This case requires keeping ordering while parallelizing tasks, and at the same time we would like to reduce the number of threads involved in our application.

The Disruptor could solve our case, but at the price of a consumer thread for each instance. As we may have hundreds of instances of this pattern in a running application, we would significantly increase the number of threads and therefore the pressure on the scheduler.

Georges Gomes, CTO of Ullink, found in an Intel article [1] an alternative approach to solving our problem. The article addresses the following issue: how do you parallelize some tasks while keeping their ordering?

Here is an example: we have a video stream that we want to re-encode. We read each frame from the input stream and re-encode it before writing it into an output stream.
Clearly, frame encoding can be parallelized: each frame is an independent source of data. The problem is that we need to keep the ordering of the frames, otherwise the video stream would not make any sense!


[Figure: OrderedScheduler1.png]

To solve this problem, we need to use an order buffer:

[Figure: OrderedScheduler1.png]

This buffer holds items until the proper order is restored, then writes them into the output stream.
To ensure proper ordering, each task grabs a ticket, on which the algorithm orders the processing.

Each thread is responsible for executing the algorithm and determining whether it is its turn to write into the output stream or not. If not, it leaves the item in the order buffer, and it is up to the thread handling the previous item (in ticket order) to take care of the leftover. It means we do not have a dedicated consumer thread, which gives us the following advantages:


  • No inter-thread communication overhead (signalling: wait/notify, await/signal)
  • No additional thread
  • No wait strategy (spinning, back-off, ...)

We can then multiply this kind of structure without fear of significantly increasing the number of threads running in the application. Bonus: we reduce the need for the synchronization that was previously required to ensure ordering.

Algorithm

When tasks are created/dispatched, we need to keep track of the order in which they were created by assigning a ticket to each of them. This ticket is just a sequence number that we keep incrementing.
For the order buffer we will create a ring-buffer-style structure with a tail counter.

[Figure: OrderedScheduler1.png]


When one of the tasks enters the structure, we check whether its ticket number matches the tail counter. If it matches, we process the task on the thread that just entered, and we increment the tail.
[Figure: OrderedScheduler2.png]

Next, the thread holding ticket number 3 arrives (as scheduling would have it). Here the ticket number does not match the current tail (2).

[Figure: OrderedScheduler3.png]
We then put the task we would like to process into the ring buffer, at the index given by the ticket number (modulo the length of the ring buffer), and we let the thread go free.

[Figure: OrderedScheduler4.png]
Finally, the thread with ticket number 2 arrives. It matches the tail, so we can process the task immediately and increment the tail.


[Figure: OrderedScheduler6.png]

We then check whether there is still a task at the next index after the tail. The task with ticket number 3, left there by the previous thread, is waiting: it is its turn, so we reuse the current thread to execute it.

And we're done!
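To make the walkthrough concrete, here is a minimal sketch of such an order buffer. This is not the implementation published on GitHub: names are illustrative, tickets are assumed to be dense and strictly increasing, and the ring must be larger than the maximum number of in-flight tasks.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class OrderBufferSketch
{
    private final AtomicReferenceArray<Runnable> slots;
    private final AtomicLong nextTicket = new AtomicLong();
    private final AtomicLong tail = new AtomicLong();

    public OrderBufferSketch(int capacity)
    {
        slots = new AtomicReferenceArray<>(capacity);
    }

    public long ticket()
    {
        // taken while the input is read, so ticket order == input order
        return nextTicket.getAndIncrement();
    }

    public void run(long ticket, Runnable task)
    {
        slots.set(index(ticket), task);          // publish the task at its ticket slot
        while (true)
        {
            long t = tail.get();
            Runnable next = slots.get(index(t));
            if (next == null)
            {
                return;                          // the tail task is not published yet:
            }                                    // its owner will drain the buffer later
            if (!slots.compareAndSet(index(t), next, null))
            {
                continue;                        // another thread claimed it, re-read the tail
            }
            try
            {
                next.run();                      // executed on the caller thread, in ticket order
            }
            finally
            {
                tail.incrementAndGet();          // only the claiming thread advances the tail
            }
        }
    }

    private int index(long ticket)
    {
        return (int) (ticket % slots.length());
    }
}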

Ordered Scheduler

If you have a code like this:
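For instance, something along these lines, where process and write are illustrative names:

void onInput(Input input)
{
    synchronized (lock)
    {
        Output result = process(input);   // parallelizable work, yet done under the lock
        write(result);                    // must happen in input order
    }
}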

You need to synchronize to ensure your code keeps the ordering required when writing to the output.
With Ordered Scheduler you can change your code to:
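The shape of the change is roughly the following. OrderedScheduler, getNextTicket, run and trash are the names assumed here for the published API (check the GitHub project for the exact signatures); process and write are still illustrative.

OrderedScheduler scheduler = new OrderedScheduler();

void onInput(Input input)
{
    long ticket;
    synchronized (lock)
    {
        ticket = scheduler.getNextTicket();      // ticket taken while the input is owned,
    }                                            // so ticket order == input order
    Output result;
    try
    {
        result = process(input);                 // runs in parallel, outside any lock
    }
    catch (RuntimeException e)
    {
        scheduler.trash(ticket);                 // never skip a ticket, or the scheduler
        throw e;                                 // would wait for it forever
    }
    scheduler.run(ticket, () -> write(result));  // output serialized in ticket order
}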

The synchronized block is still required in this example to ensure the input is attached to the right ticket. You could do it differently by reading your input in a single thread, taking the ticket there, and then dispatching the processing tasks to a thread pool. Your call to the ordered scheduler will then serialize your output.

What is important to keep in mind is that you cannot miss a ticket. This is why, in this example, if an exception is thrown during processing we call the trash method to inform the ordered scheduler that the ticket is no longer valid; otherwise it would wait forever for that ticket to come into the scheduler.

The implementation is open sourced on GitHub.

References

[1] https://software.intel.com/en-us/articles/exploiting-data-parallelism-in-ordered-data-streams

Monday, July 13, 2015

Notify... oh, wait! I have a signal.

Introduction


When you want to pipeline, delegate some tasks asynchronously, or simply synchronize 2 threads, you usually end up using the wait/notify couple (or await/signal, depending on your taste).

But what is the cost, or the overhead, of this kind of pattern?

Under the hood

What happens when we use the wait/notify couple?
I simplify here to this couple, as the other (await/signal) calls the same set of underlying methods:


                 Linux                     Windows
Object.notify    pthread_cond_signal       SetEvent
Object.wait      pthread_cond_timedwait    WaitForSingleObject

Basically, we are performing system calls. For Object.wait, we ask the OS scheduler to move the current thread to the wait queue.


For Object.notify, we ask the scheduler (via futexes[1] on Linux) to move one of the waiting threads from the wait queue to the run queue to be scheduled when possible.

Just a quick remark about system calls: contrary to common belief, system calls do not imply context switches [2]. It depends on the kernel implementation. On Linux there is no context switch unless the system call implementation requires it, as for I/O. In the case of pthread_cond_signal, there is no context switch involved.

Knowing that, what is the cost of calling notify for a producer thread ?

Measure, don't guess!

Why not build a micro-benchmark? Because I do not care about average latency; I care about outliers and spikes: how it behaves 50, 90, 95, 99, 99.9% of the time, and what maximum I can observe.
Let's measure it with HdrHistogram from Gil Tene and the following code:
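A minimal sketch of the benchmark, using HdrHistogram; class and variable names are mine, and the real code records and prints its statistics a little differently.

import java.util.ArrayList;
import java.util.List;
import org.HdrHistogram.Histogram;

public class NotifyBench
{
    public static void main(String[] args) throws Exception
    {
        int pairs = Integer.parseInt(args[0]);
        int iterations = Integer.parseInt(args[1]);
        Histogram histogram = new Histogram(3600_000_000_000L, 3);
        List<Thread> criticals = new ArrayList<>();
        for (int i = 0; i < pairs; i++)
        {
            Object lock = new Object();
            Thread flushing = new Thread(() -> {          // waits to be told data is available
                synchronized (lock)
                {
                    try { while (true) lock.wait(); }
                    catch (InterruptedException e) { /* exit */ }
                }
            });
            flushing.setDaemon(true);
            flushing.start();
            criticals.add(new Thread(() -> {              // measures the cost of each notify call
                for (int j = 0; j < iterations; j++)
                {
                    synchronized (lock)
                    {
                        long start = System.nanoTime();
                        lock.notify();
                        long elapsed = System.nanoTime() - start;
                        synchronized (histogram) { histogram.recordValue(elapsed); }
                    }
                }
            }));
        }
        for (Thread t : criticals) t.start();
        for (Thread t : criticals) t.join();
        histogram.outputPercentileDistribution(System.out, 1.0);
    }
}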



This code basically creates n pairs of threads: one (critical) which tries to notify the second (flushing) that data is available to be processed (or flushed).
I run this code with the following parameters: 16 1000. It means we have 16 pairs of threads doing wait/notify.

Results on Windows (ns):
count: 16000
min: 0
max: 55243
mean: 549.5238125
50%: 302
90%: 1208
95%: 1812
99%: 3019
99.9%: 11472




Results on Linux (ns):
count: 16000
min: 69
max: 20906
mean: 1535.5181875
50%: 1532
90%: 1790
95%: 1888
99%: 2056
99.9%: 3175



So most of the time we observe a couple of microseconds for a call to notify. But in some cases we can reach 50 µs! For low-latency systems this can be an issue and a source of outliers.

Now, if we push our test program a little, to 256 pairs of threads, we end up with the following results:

Results on Windows (ns):
count: 256000
min: 0
max: 1611088
mean: 442.25016015625
50%: 302
90%: 907
95%: 1208
99%: 1811
99.9%: 2717

Results on Linux (ns):
count: 256000
min: 68
max: 1590240
mean: 1883.61266015625
50%: 1645
90%: 2367
95%: 2714
99%: 7762
99.9%: 15230

A notify call can take 1.6ms!

Even though there is no contention in this code per se, there is another kind of contention happening in the kernel: the scheduler needs to arbitrate which thread can run. Having 256 threads trying to wake up their partner thread puts a lot of pressure on the scheduler, which becomes the bottleneck here.

Conclusion

Signalling can be a source of outliers, not because we have contention between threads executing code, but because the OS scheduler needs to arbitrate among those threads when responding to wake-up requests.

References

[1] Futexes Are Tricky, U. Drepper: http://www.akkadia.org/drepper/futex.pdf
[2] http://en.wikipedia.org/wiki/System_call#Processor_mode_and_context_switching

Tuesday, July 7, 2015

WhiteBox API

I had already seen this in JCStress, but it was a post from Rémi Forax on the mechanical sympathy forum that brought it to my attention, when I saw what it is possible to do with it. Here is a summary:




This API has been usable since JDK 8, and there are some new additions in JDK 9.

But how do you use it?
This API is not part of the standard API but of the test library from OpenJDK. You can find it here.
Download the OpenJDK sources, then either build them entirely and grab the wb.jar, or:

  1. go to test/testlibrary/whitebox directory
  2. javac -sourcepath . -d . sun\hotspot\**.java
  3. jar cf wb.jar .

Place your wb.jar next to your application and launch it with:

java -Xbootclasspath/a:wb.jar -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI ...

Here is an example you can run with the WhiteBox jar:
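A minimal sketch of such an example; youngGC and isObjectInOldGen are the WhiteBox method names as found in the OpenJDK test library, and may differ slightly between versions.

import sun.hotspot.WhiteBox;

public class WhiteBoxGCTest
{
    public static void main(String[] args)
    {
        WhiteBox wb = WhiteBox.getWhiteBox();
        Object obj = new Object();
        System.out.println("in old gen: " + wb.isObjectInOldGen(obj)); // false: freshly allocated
        wb.youngGC();   // first minor GC: obj survives into a survivor space
        wb.youngGC();   // with -XX:MaxTenuringThreshold=1, the second minor GC promotes it
        System.out.println("in old gen: " + wb.isObjectInOldGen(obj)); // expected: true
    }
}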


For this example you need to add -XX:MaxTenuringThreshold=1 to make it work as expected.
Now you have an API to trigger minor GCs and test whether an object resides in the old generation. Pretty awesome!

You can also trigger JIT compilation on demand for some methods and change VM flags on the fly:
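A sketch of the compilation side; enqueueMethodForCompilation and isMethodCompiled come from the same test library, and the compilation level used (4 for C2) is an assumption. The flag-reading/writing methods (getBooleanVMFlag and friends) are among the newer additions.

import java.lang.reflect.Method;
import sun.hotspot.WhiteBox;

public class WhiteBoxJitTest
{
    static int add(int a, int b) { return a + b; }

    public static void main(String[] args) throws Exception
    {
        WhiteBox wb = WhiteBox.getWhiteBox();
        Method m = WhiteBoxJitTest.class.getDeclaredMethod("add", int.class, int.class);
        System.out.println("compiled: " + wb.isMethodCompiled(m));  // false at startup
        wb.enqueueMethodForCompilation(m, 4);                       // ask for a C2 compilation
        while (!wb.isMethodCompiled(m))
        {
            Thread.sleep(10);                                       // wait for the compiler thread
        }
        System.out.println("compiled: " + wb.isMethodCompiled(m));  // true
    }
}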



Unlike Unsafe, this API seems difficult to use in a production environment, but at least you can have fun in the lab, or use it, like the OpenJDK does, for your low-level tests. Enjoy!

Tuesday, May 26, 2015

Volatile and memory barriers

I have already blogged about the effect of a volatile variable on optimizations performed by the JIT. But what is really the difference with a regular variable? And what are the impacts in terms of performance?

Semantic


Volatile has a well-defined semantics in the Java Memory Model (JMM), but to summarize:
  • Accesses cannot be reordered
  • It ensures data visibility to other threads

Visibility


Doug Lea describes thread visibility in terms of flushing caches; however, as pointed out by Martin Thompson in his post, a volatile access does not flush the cache for visibility (as in writing data to memory to make it visible to all cores).
On ccNUMA architectures, all data in the cache subsystem is in fact coherent with main memory. So the semantics of volatile apply through the load/store buffers (or memory-ordering buffers) placed between the registers and the L1 cache.

Depending on the CPU architecture/instruction set, the instructions generated to ensure those properties can vary. Let's focus on x86, which is the most widespread.

Reordering


For reordering there are 2 kinds:
  • Compiler
  • Hardware/CPU
The compiler is able to reorder instructions during instruction scheduling, to match the cost of loading or storing data with the CPU specifications.
For example, it can be interesting to issue 2 independent loads back to back, so that the CPU can overlap the time spent waiting for that data with other operations.

In the following example I have put 6 non-volatile fields :

public class TestJIT
{
    private static int field1;
    private static int field2;
    private static int field3;
    private static int field4;
    private static int field5;
    private static int field6;
    
    private static void assign(int i)
    {
        field1 = i << 1;
        field2 = i << 2;
        field3 = i << 3;
        field4 = i << 4;
        field5 = i << 5;
        field6 = i << 6;
    }

    public static void main(String[] args) throws Exception
    {
        for (int i = 0; i < 10000; i++)
        {
            assign(i);
        }
        Thread.sleep(1000);
    }
}


Let's examine what is generated by the JIT for the method assign:

  # {method} 'assign' '(I)V' in 'com/bempel/sandbox/TestJIT'
  # parm0:    ecx       = int
  #           [sp+0x10]  (sp of caller)
  0x02438800: push   ebp
  0x02438801: sub    esp,0x8            ;*synchronization entry
                                        ; - com.bempel.sandbox.TestJIT::assign@-1 (line 26)
  0x02438807: mov    ebx,ecx
  0x02438809: shl    ebx,1
  0x0243880b: mov    edx,ecx
  0x0243880d: shl    edx,0x2
  0x02438810: mov    eax,ecx
  0x02438812: shl    eax,0x3
  0x02438815: mov    esi,ecx
  0x02438817: shl    esi,0x4
  0x0243881a: mov    ebp,ecx
  0x0243881c: shl    ebp,0x5
  0x0243881f: shl    ecx,0x6
  0x02438822: mov    edi,0x160
  0x02438827: mov    DWORD PTR [edi+0x565c3b0],ebp
                                        ;*putstatic field5
                                        ; - com.bempel.sandbox.TestJIT::assign@27 (line 30)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x0243882d: mov    ebp,0x164
  0x02438832: mov    DWORD PTR [ebp+0x565c3b0],ecx
                                        ;*putstatic field6
                                        ; - com.bempel.sandbox.TestJIT::assign@34 (line 31)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x02438838: mov    ebp,0x150
  0x0243883d: mov    DWORD PTR [ebp+0x565c3b0],ebx
                                        ;*putstatic field1
                                        ; - com.bempel.sandbox.TestJIT::assign@3 (line 26)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x02438843: mov    ecx,0x154
  0x02438848: mov    DWORD PTR [ecx+0x565c3b0],edx
                                        ;*putstatic field2
                                        ; - com.bempel.sandbox.TestJIT::assign@9 (line 27)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x0243884e: mov    ebx,0x158
  0x02438853: mov    DWORD PTR [ebx+0x565c3b0],eax
                                        ;*putstatic field3
                                        ; - com.bempel.sandbox.TestJIT::assign@15 (line 28)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x02438859: mov    ecx,0x15c
  0x0243885e: mov    DWORD PTR [ecx+0x565c3b0],esi
                                        ;*putstatic field4
                                        ; - com.bempel.sandbox.TestJIT::assign@21 (line 29)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x02438864: add    esp,0x8
  0x02438867: pop    ebp
  0x02438868: test   DWORD PTR ds:0x190000,eax
                                        ;   {poll_return}
  0x0243886e: ret    


As you can see in the comments, the order of the field assignments is the following: field5, field6, field1, field2, field3, field4.
We now add the volatile modifier to field1 & field6:
  # {method} 'assign' '(I)V' in 'com/bempel/sandbox/TestJIT'
  # parm0:    ecx       = int
  #           [sp+0x10]  (sp of caller)
  0x024c8800: push   ebp
  0x024c8801: sub    esp,0x8
  0x024c8807: mov    ebp,ecx
  0x024c8809: shl    ebp,1
  0x024c880b: mov    edx,ecx
  0x024c880d: shl    edx,0x2
  0x024c8810: mov    esi,ecx
  0x024c8812: shl    esi,0x3
  0x024c8815: mov    eax,ecx
  0x024c8817: shl    eax,0x4
  0x024c881a: mov    ebx,ecx
  0x024c881c: shl    ebx,0x5
  0x024c881f: shl    ecx,0x6
  0x024c8822: mov    edi,0x150
  0x024c8827: mov    DWORD PTR [edi+0x562c3b0],ebp
                                        ;*putstatic field1
                                        ; - com.bempel.sandbox.TestJIT::assign@3 (line 26)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c882d: mov    ebp,0x160
  0x024c8832: mov    DWORD PTR [ebp+0x562c3b0],ebx
                                        ;*putstatic field5
                                        ; - com.bempel.sandbox.TestJIT::assign@27 (line 30)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c8838: mov    ebx,0x154
  0x024c883d: mov    DWORD PTR [ebx+0x562c3b0],edx
                                        ;*putstatic field2
                                        ; - com.bempel.sandbox.TestJIT::assign@9 (line 27)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c8843: mov    ebp,0x158
  0x024c8848: mov    DWORD PTR [ebp+0x562c3b0],esi
                                        ;*putstatic field3
                                        ; - com.bempel.sandbox.TestJIT::assign@15 (line 28)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c884e: mov    ebx,0x15c
  0x024c8853: mov    DWORD PTR [ebx+0x562c3b0],eax
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c8859: mov    ebp,0x164
  0x024c885e: mov    DWORD PTR [ebp+0x562c3b0],ecx
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c8864: lock add DWORD PTR [esp],0x0  ;*putstatic field1
                                        ; - com.bempel.sandbox.TestJIT::assign@3 (line 26)
  0x024c8869: add    esp,0x8
  0x024c886c: pop    ebp
  0x024c886d: test   DWORD PTR ds:0x180000,eax
                                        ;   {poll_return}
  0x024c8873: ret    

Now field1 is really the first field assigned and field6 is the last one. But in the middle we have the following order: field5, field2, field3, field4.

Besides that, the CPU is able to reorder the instruction flow in certain circumstances to optimize the efficiency of instruction execution. Those properties are well summarized in the Intel white paper on Memory Ordering.

Write access

In the previous example, which contains only volatile writes, you can notice a special instruction: lock add. This is a bit weird at first: we add 0 to the memory pointed to by the SP (stack pointer) register. So we are not changing the data at this memory location, but with the lock prefix, memory-related instructions are processed specially and act as a memory barrier, similar to the mfence instruction. Dave Dice explains in his blog that they benchmarked the 2 kinds of barrier, and lock add seems the most efficient one on today's architectures.

So this barrier ensures that there is no reordering across the instruction and also drains all the writes pending in the store buffer. After it executes, all writes are visible to all other threads through the cache subsystem or main memory. This costs some latency, spent waiting for the drain.

LazySet

Sometimes we can relax the constraint of immediate visibility while still keeping the ordering guarantee. For this, Doug Lea introduced the lazySet method on the Atomic* objects. Let's use an AtomicInteger as a replacement for a volatile int:

public class TestJIT
{
    private static AtomicInteger field1 = new AtomicInteger(0);
    private static int field2;
    private static int field3;
    private static int field4;
    private static int field5;
    private static AtomicInteger field6 = new AtomicInteger(0);
    
    public static void assign(int i)
    {
        field1.lazySet(i << 1);
        field2 = i << 2;
        field3 = i << 3;        
        field4 = i << 4;
        field5 = i << 5;
        field6.lazySet(i << 6);
    }
    
    public static void main(String[] args) throws Throwable
    {
        for (int i = 0; i < 10000; i++)
        {
            assign(i);
        }
        Thread.sleep(1000);
    }
}

We have the following output for PrintAssembly:

  # {method} 'assign' '(I)V' in 'com/bempel/sandbox/TestJIT'
  # this:     ecx       = 'com/bempel/sandbox/TestJIT'
  # parm0:    edx       = int
  #           [sp+0x10]  (sp of caller)
  0x024c7f40: cmp    eax,DWORD PTR [ecx+0x4]
  0x024c7f43: jne    0x024ace40         ;   {runtime_call}
  0x024c7f49: xchg   ax,ax
[Verified Entry Point]
  0x024c7f4c: mov    DWORD PTR [esp-0x3000],eax
  0x024c7f53: push   ebp
  0x024c7f54: sub    esp,0x8            ;*synchronization entry
                                        ; - com.bempel.sandbox.TestJIT::assign@-1 (line 30)
  0x024c7f5a: mov    ebp,edx
  0x024c7f5c: shl    ebp,1              ;*ishl
                                        ; - com.bempel.sandbox.TestJIT::assign@5 (line 30)
  0x024c7f5e: mov    ebx,0x150
  0x024c7f63: mov    ecx,DWORD PTR [ebx+0x56ec4b8]
                                        ;*getstatic field1
                                        ; - com.bempel.sandbox.TestJIT::assign@0 (line 30)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c7f69: test   ecx,ecx
  0x024c7f6b: je     0x024c7fd0
  0x024c7f6d: mov    DWORD PTR [ecx+0x8],ebp  ;*invokevirtual putOrderedInt
                                        ; - java.util.concurrent.atomic.AtomicInteger::lazySet@8 (line 80)
                                        ; - com.bempel.sandbox.TestJIT::assign@6 (line 30)
  0x024c7f70: mov    edi,edx
  0x024c7f72: shl    edi,0x2
  0x024c7f75: mov    ebp,edx
  0x024c7f77: shl    ebp,0x3
  0x024c7f7a: mov    esi,edx
  0x024c7f7c: shl    esi,0x4
  0x024c7f7f: mov    eax,edx
  0x024c7f81: shl    eax,0x5
  0x024c7f84: shl    edx,0x6            ;*ishl
                                        ; - com.bempel.sandbox.TestJIT::assign@39 (line 35)
  0x024c7f87: mov    ebx,0x154
  0x024c7f8c: mov    ebx,DWORD PTR [ebx+0x56ec4b8]
                                        ;*getstatic field6
                                        ; - com.bempel.sandbox.TestJIT::assign@33 (line 35)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c7f92: mov    ecx,0x158
  0x024c7f97: mov    DWORD PTR [ecx+0x56ec4b8],edi
                                        ;*putstatic field2
                                        ; - com.bempel.sandbox.TestJIT::assign@12 (line 31)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c7f9d: mov    ecx,0x164
  0x024c7fa2: mov    DWORD PTR [ecx+0x56ec4b8],eax
                                        ;*putstatic field5
                                        ; - com.bempel.sandbox.TestJIT::assign@30 (line 34)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c7fa8: mov    edi,0x15c
  0x024c7fad: mov    DWORD PTR [edi+0x56ec4b8],ebp
                                        ;*putstatic field3
                                        ; - com.bempel.sandbox.TestJIT::assign@18 (line 32)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c7fb3: mov    ecx,0x160
  0x024c7fb8: mov    DWORD PTR [ecx+0x56ec4b8],esi
                                        ;*putstatic field4
                                        ; - com.bempel.sandbox.TestJIT::assign@24 (line 33)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024c7fbe: test   ebx,ebx
  0x024c7fc0: je     0x024c7fdd
  0x024c7fc2: mov    DWORD PTR [ebx+0x8],edx  ;*invokevirtual putOrderedInt
                                        ; - java.util.concurrent.atomic.AtomicInteger::lazySet@8 (line 80)
                                        ; - com.bempel.sandbox.TestJIT::assign@40 (line 35)
  0x024c7fc5: add    esp,0x8
  0x024c7fc8: pop    ebp
  0x024c7fc9: test   DWORD PTR ds:0x180000,eax
                                        ;   {poll_return}
  0x024c7fcf: ret    

So now there is no trace of memory barriers whatsoever (no mfence, no lock add instruction). But the order of field1 and field6 remains (first and last), then field2, field5, field3 and field4.
In fact, the lazySet method calls putOrderedInt from the Unsafe object, which does not emit a memory barrier but guarantees no reordering.

Read Access

We will now examine the cost of a volatile read with this example:

public class TestJIT
{
    private static volatile int field1 = 42;
    
    public static void testField1(int i)
    {
        if (field1 < 0)
        {
            System.out.println("field value: " + field1);
        }
    }
    
    public static void main(String[] args) throws Throwable
    {
        for (int i = 0; i < 10000; i++)
        {
            testField1(i);
        }
        Thread.sleep(1000);
    }
}

The PrintAssembly output looks like:

  # {method} 'testField1' '(I)V' in 'com/bempel/sandbox/TestJIT'
  # parm0:    ecx       = int
  #           [sp+0x10]  (sp of caller)
  0x024f8800: mov    DWORD PTR [esp-0x3000],eax
  0x024f8807: push   ebp
  0x024f8808: sub    esp,0x8            ;*synchronization entry
                                        ; - com.bempel.sandbox.TestJIT::testField1@-1 (line 22)
  0x024f880e: mov    ebx,0x150
  0x024f8813: mov    ecx,DWORD PTR [ebx+0x571c418]
                                        ;*getstatic field1
                                        ; - com.bempel.sandbox.TestJIT::testField1@0 (line 22)
                                        ;   {oop('com/bempel/sandbox/TestJIT')}
  0x024f8819: test   ecx,ecx
  0x024f881b: jl     0x024f8828         ;*ifge
                                        ; - com.bempel.sandbox.TestJIT::testField1@3 (line 22)
  0x024f881d: add    esp,0x8
  0x024f8820: pop    ebp
  0x024f8821: test   DWORD PTR ds:0x190000,eax
                                        ;   {poll_return}
  0x024f8827: ret    

No memory barrier. Only a load from a memory address into a register: this is all that is required for a volatile read. The JIT cannot cache the value in a register across reads, but thanks to the cache subsystem, the latency of a volatile read is very similar to that of a regular variable read. If you test the same Java code without the volatile modifier, the result of this test is in fact the same.
Volatile reads also put some constraints on reordering.

Summary

Volatile access prevents instruction reordering, both at the compiler level and at the hardware level.
It ensures visibility: for writes at the price of a memory barrier, for reads by forbidding register caching and reordering.

Saturday, May 23, 2015

Measuring contention on locks

Locks are one of the major bottlenecks for the scalability of your code. Lock-free structures offer an alternative to lock usage. However, they are sometimes more difficult to use or to integrate into existing code. Before rushing into this specific approach, it would be interesting to determine which part of your locking code would really benefit from it. Locks become a bottleneck if more than one thread tries to acquire them and needs to wait for a release. This is contention. Measuring this contention will help us pinpoint which locks need to be improved.

You see, in the Java world there are two kinds of locks: those made with synchronized blocks, and those which use java.util.concurrent.Lock. The first ones are directly handled by the JVM with the help of specific bytecodes (monitorenter & monitorexit). For those, the JVM provides, via JVMTI, events that native agents can use to get information about synchronized blocks: MonitorContendedEnter & MonitorContendedEntered.
Profilers like YourKit exploit this information to provide contention profiling.



Azul Systems provides, with their Zing JVM, a tool named ZVision which can also profile synchronized blocks:

[Figure: AzulZvision_syncLocks.png]

But what about j.u.c.Lock ?

Here, things are trickier: j.u.c.Lock is not handled by the JVM. It is part of the JDK classes, like any regular library. No special treatment, hence no special information about them.
YourKit is not able to profile them. However, another profiler which is able to profile j.u.c.Lock does exist: JProfiler.

[Figure: JProfiler_SyncLocks.png]

[Figure: JProfiler_jucLocks.png]

I suppose it uses instrumentation to insert callbacks where the j.u.c.Lock classes are used. The profile information seems precise, with callstacks. It helps identify which locks are effectively contended.

I have also found an article describing jucProfiler, a tool created by an IBM team.

Before finding those last 2 tools, I made a very light and quick j.u.c.Lock profiler of my own. The technique is simple and I will describe it below. The full code is available on GitHub.
The goal was to profile existing code using j.u.c.Locks. The plan wasn't to create any wrapper around them or any subtype, but to intercept the actual classes: I copied the JDK classes and kept them in the same package.
I identified the places in the code where the lock fails to be acquired immediately. In those cases there is contention with another thread which already holds the lock.
Here is an extract from ReentrantLock:
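A sketch of the idea, using the non-fair acquire path of the copied ReentrantLock as the host; the contention field and the exact insertion point are illustrative.

final boolean nonfairTryAcquire(int acquires)
{
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0)
    {
        if (compareAndSetState(0, acquires))
        {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread())
    {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    contention++;   // another thread holds the lock: count one contended attempt
    return false;
}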

I increment a contention field to keep track of the failed attempts. No need for an atomic counter here: if some contended acquisitions are not precisely reported, it is not a big deal.

Each lock instance is identified at the construction by a unique ID and a callstack, stored into a global static map.

When you want to report statistics about lock contention, you traverse the map and print the information about each lock, including the number of contentions detected.
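A minimal sketch of that bookkeeping, with illustrative names (the code published on GitHub is organized differently):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ContentionRegistry
{
    private static final AtomicInteger NEXT_ID = new AtomicInteger();
    private static final Map<Integer, Throwable> CREATION_SITES = new ConcurrentHashMap<>();
    private static final Map<Integer, int[]> COUNTERS = new ConcurrentHashMap<>();

    // called from the constructor of the copied lock class
    public static int register()
    {
        int id = NEXT_ID.incrementAndGet();
        CREATION_SITES.put(id, new Throwable("creating lock " + id)); // captures the callstack
        COUNTERS.put(id, new int[1]);
        return id;
    }

    // called from the acquire path when the lock is already held
    public static void contended(int id)
    {
        COUNTERS.get(id)[0]++; // deliberately not atomic: approximate counts are good enough
    }

    public static void report()
    {
        for (Map.Entry<Integer, int[]> e : COUNTERS.entrySet())
        {
            CREATION_SITES.get(e.getKey()).printStackTrace();          // where the lock was created
            System.out.println(e.getKey() + " = " + e.getValue()[0]);  // lock id = contention count
        }
    }
}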

To use those classes instead of the ones from the JDK, you need to prepend the jar containing them to the bootclasspath. This way, your classes are looked up before those contained in rt.jar:

-Xbootclasspath/p:contention-profiling.jar  

Here is an example of the output of the profiler:

The 'creating lock n' lines indicate where lock id n was created. The lines with an equals sign report, for a lock id (left of the equals), the number of times that lock was contended (right of the equals).
This helps you focus and work on the most contended ones first.

And remember: Measure, Don't Premature!

Thanks Georges & Aurélien for the review.

Friday, May 9, 2014

Branches: I have lost my path!

At Devoxx France 2014, I gave a talk about Hardware Performance Counters, including examples I had already blogged about previously (first & second). But I also included a third one, showing how to measure branch mispredictions and their effects.

For this example, I was inspired by this question on Stack Overflow. There are very good explanations of the phenomenon in the answers; I encourage you to read them.

I am more interested in measuring this effect, to be able to pinpoint this kind of issue in my code in the future. I have rewritten the code of the example as follows:

import java.util.Random;
import java.util.Arrays;

public class CondLoop
{
    final static int COUNT = 64*1024;
    static Random random = new Random(System.currentTimeMillis());

    private static int[] createData(int count, boolean warmup, boolean predict)
    {
        int[] data = new int[count];
        for (int i = 0; i < count; i++)
        {
            data[i] = warmup ? random.nextInt(2) 
                             : (predict ? 1 : random.nextInt(2));
        }
        return data;
    }
    
    private static int benchCondLoop(int[] data)
    {
        long ms = System.currentTimeMillis();
        HWCounters.start();
        int sum = 0;
        for (int i = 0; i < data.length; i++)
        {
            if (data[i] == 1)
                sum += i;
        }
        HWCounters.stop();
        return sum;
    }

    public static void main(String[] args) throws Exception
    {
        boolean predictable = Boolean.parseBoolean(args[0]);
        HWCounters.init();
        int count = 0;
        for (int i = 0; i < 10000; i++)
        {
            int[] data = createData(1024, true, predictable); 
            count += benchCondLoop(data);
        }
        System.out.println("warmup done");
        Thread.sleep(1000);
        int[] data = createData(512*1024, false, predictable); 
        count += benchCondLoop(data);
        HWCounters.printResults();
        System.out.println(count);
        HWCounters.shutdown();
    }
}


I have 2 modes: one is completely predictable, with only 1s in the array; the other is unpredictable, with the array filled randomly with 0s and 1s.
When I run this code with HPCs, including the branch misprediction counter, on a machine with 2 Xeon X5680 (Westmere), I get the following results:

[root@archi-srv condloop]# java -cp overseer.jar:. CondLoop true
warmup done
Cycles: 2,039,751
branch mispredicted: 20
-1676149632

[root@archi-srv condloop]# java -cp overseer.jar:. CondLoop false
warmup done
Cycles: 2,042,371
branch mispredicted: 20
-1558729579

We can see there is no difference between the 2 modes. In fact there is a caveat in my example: it is too simple, and the JIT compiler is able to perform an optimization I was not aware of at the time. To understand what's going on, I took a tour with my old friend PrintAssembly, as usual! (Note: I am using the Intel syntax with the help of -XX:PrintAssemblyOptions=intel because, well, I am running on an x86_64 CPU, so let's use their syntax!)

  # {method} 'benchCondLoop' '([I)I' in 'CondLoop'
  [...]
  0x00007fe45105fcc9: cmp    ebp,ecx
  0x00007fe45105fccb: jae    0x00007fe45105fe27  ;*iaload
                                         ; - CondLoop::benchCondLoop@15 (line 28)
  0x00007fe45105fcd1: mov    r8d,DWORD PTR [rbx+rbp*4+0x10]
  0x00007fe45105fcd6: mov    edx,ebp
  0x00007fe45105fcd8: add    edx,r13d
  0x00007fe45105fcdb: cmp    r8d,0x1
  0x00007fe45105fcdf: cmovne edx,r13d
  0x00007fe45105fce3: inc    ebp                ;*iinc
                                         ; - CondLoop::benchCondLoop@24 (line 26)
  0x00007fe45105fce5: cmp    ebp,r10d
  0x00007fe45105fce8: jge    0x00007fe45105fcef  ;*if_icmpge
                                         ; - CondLoop::benchCondLoop@10 (line 26)
  0x00007fe45105fcea: mov    r13d,edx
  0x00007fe45105fced: jmp    0x00007fe45105fcc9
  [...]


The output shows a special instruction I was not familiar with: cmovne. But it reminded me of a thread on the mechanical sympathy forum about this instruction (that's why it is important to read this forum!).
It seems this instruction is used specifically to avoid branch mispredictions.
So let's rewrite the condition as a more complex one:

    private static int benchCondLoop(int[] data)
    {
        long ms = System.currentTimeMillis();
        HWCounters.start();
        int sum = 0;
        for (int i = 0; i < data.length; i++)
        {
            if (i+ms > 0 && data[i] == 1)
                sum += i;
        }
        HWCounters.stop();
        return sum;
    }

Here are now the results:
[root@archi-srv condloop]# java -cp overseer.jar:. CondLoop true
warmup done
Cycles: 2,114,347
branch mispredicted: 21
-1677344554

[root@archi-srv condloop]# java -cp overseer.jar:. CondLoop false
warmup done
Cycles: 7,471,464
branch mispredicted: 261,988
-1541838686

See, the number of cycles goes through the roof: more than 3x as many cycles! Remember that for the CPU a misprediction means flushing the pipeline and decoding instructions from the new branch target, which stalls execution during this time. Depending on the CPU, it lasts 10 to 20 cycles.

In the Stack Overflow question, sorting the array improved the test a lot. Let's do the same:

 int[] data = createData(512*1024, false, predictable); 
 Arrays.sort(data);
 count += benchCondLoop(data);


[root@archi-srv condloop]# java -cp overseer.jar:. CondLoop false
warmup done
Cycles: 2,112,265
branch mispredicted: 34
-1659649448

This is indeed very efficient: we are now much more predictable.

You can find the code of this example on my GitHub.

Tuesday, December 17, 2013

ArrayList vs LinkedList

This post was originally posted on Java Advent Calendar

I must confess the title of this post is a little bit catchy. I recently read this blog post, which is a good summary of the discussions & debates on the subject.
But this time I would like to try a different approach to compare those 2 well-known data structures: using Hardware Performance Counters.

I will not perform a micro-benchmark, well, not directly: I will not measure time with System.nanoTime(), but rather use HPCs like cache hits/misses.

No need to present those data structures; everybody knows what they are used for and how they are implemented. I am focusing my study on list iteration because, besides adding an element, this is the most common task for a list, and also because the memory access pattern of a list is a good example of CPU cache interaction.


Here is my code for measuring list iteration for LinkedList & ArrayList:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

import ch.usi.overseer.OverHpc;

public class ListIteration
{
    private static List<String> arrayList = new ArrayList<>();
    private static List<String> linkedList = new LinkedList<>();

    public static void initializeList(List<String> list, int bufferSize)
    {
        for (int i = 0; i < 50000; i++)
        {
            byte[] buffer = null;
            if (bufferSize > 0)
            {
                buffer = new byte[bufferSize];
            }
            String s = String.valueOf(i);
            list.add(s);
            // avoid buffer to be optimized away
            if (System.currentTimeMillis() == 0)
            {
                System.out.println(buffer);
            }
        }
    }

    public static void bench(List<String> list)
    {
        if (list.contains("bar"))
        {
            System.out.println("bar found");
        }
    }

    public static void main(String[] args) throws Exception
    {
        if (args.length != 2) return;
        List<String> benchList = "array".equals(args[0]) ? arrayList : linkedList;
        int bufferSize = Integer.parseInt(args[1]);
        initializeList(benchList, bufferSize);
        HWCounters.init();
        System.out.println("init done");
        // warmup
        for (int i = 0; i < 10000; i++)
        {
            bench(benchList);
        }
        Thread.sleep(1000);
        System.out.println("warmup done");

        HWCounters.start();
        for (int i = 0; i < 1000; i++)
        {
            bench(benchList);
        }
        HWCounters.stop();
        HWCounters.printResults();
        HWCounters.shutdown();
    }
}

To measure, I am using a class called HWCounters, based on the overseer library, to get Hardware Performance Counters. You can find the code of this class here.

The program takes 2 parameters: the first one chooses between the ArrayList and LinkedList implementations, the second one is a buffer size used in the initializeList method. This method fills a list implementation with 50K strings. Each string is newly created just before being added to the list. We may also allocate a buffer, based on the second parameter of the program; if it is 0, no buffer is allocated.
The bench method searches for a constant string which is not contained in the list, so we fully traverse it.
Finally, the main method initializes the list, warms up the bench method and measures 1000 runs of it. Then we print the results from the HPCs.

Let's run our program with no buffer allocation on Linux with 2 Xeon X5680:

[root@archi-srv]# java -cp .:overseer.jar com.ullink.perf.myths.ListIteration array 0
init done
warmup done
Cycles: 428,711,720
Instructions: 776,215,597
L2 hits: 5,302,792
L2 misses: 23,702,079
LLC hits: 42,933,789
LLC misses: 73
CPU migrations: 0
Local DRAM: 0
Remote DRAM: 0

[root@archi-srv]# /java -cp .:overseer.jar com.ullink.perf.myths.ListIteration linked 0
init done
warmup done
Cycles: 767,019,336
Instructions: 874,081,196
L2 hits: 61,489,499
L2 misses: 2,499,227
LLC hits: 3,788,468
LLC misses: 0
CPU migrations: 0
Local DRAM: 0
Remote DRAM: 0

The first run is on the ArrayList implementation, the second on the LinkedList one.

  • Cycles is the number of CPU cycles spent executing our code. Clearly LinkedList spends many more cycles than ArrayList.
  • The instruction count is a little higher for LinkedList, but it is not so significant here.
  • For L2 cache accesses we have a clear difference: ArrayList has significantly more L2 misses than LinkedList.
  • Mechanically, LLC hits are much more numerous for ArrayList.

The conclusion of this comparison is that most of the data accessed during list iteration is located in L2 for LinkedList but in L3 for ArrayList.
My explanation is that the strings added to the list are created right before being added. For LinkedList this means each string is local to the Node entry created when adding the element: we get more locality with the node.

But let's re-run the comparison with an intermediary buffer allocated for each new string added.


[root@archi-srv]# java -cp .:overseer.jar com.ullink.perf.myths.ListIteration array 256
init done
warmup done
Cycles: 584,965,201
Instructions: 774,373,285
L2 hits: 952,193
L2 misses: 62,840,804
LLC hits: 63,126,049
LLC misses: 4,416
CPU migrations: 0
Local DRAM: 824
Remote DRAM: 0

[root@archi-srv]# java -cp .:overseer.jar com.ullink.perf.myths.ListIteration linked 256
init done
warmup done
Cycles: 5,289,317,879
Instructions: 874,350,022
L2 hits: 1,487,037
L2 misses: 75,500,984
LLC hits: 81,881,688
LLC misses: 5,826,435
CPU migrations: 0
Local DRAM: 1,645,436
Remote DRAM: 1,042

Here the results are quite different:

  • The cycle count is 10 times higher.
  • The instruction count stays the same as previously.
  • For cache accesses, ArrayList has more L2 misses/LLC hits than in the previous run, but still in the same order of magnitude. LinkedList, on the contrary, has a lot more L2 misses/LLC hits, and moreover a significant number of LLC misses/DRAM accesses. That is where the difference lies.

With the intermediary buffer, we push entries and strings away from each other, which generates more cache misses and, in the end, DRAM accesses, which are much slower than cache hits.
ArrayList is more predictable here, since we keep elements local to each other.

The memory access pattern here is crucial for list iteration performance. ArrayList is more stable than LinkedList, in the sense that whatever you do between adding each element, your data stays much more local than with a LinkedList.
Remember also that iterating through an array is much more efficient for the CPU, since the very predictable access pattern can trigger hardware prefetching.