Given:
import java.util.concurrent.RecursiveTask;

class Fibonacci extends RecursiveTask<Integer> {
    final int n;
    Fibonacci(int n) { this.n = n; }

    @Override
    protected Integer compute() {
        if (n <= 1)
            return n;
        Fibonacci f1 = new Fibonacci(n - 1);
        f1.fork();                           // start f1 asynchronously
        Fibonacci f2 = new Fibonacci(n - 2);
        return f2.compute() + f1.join();     // Line X
    }
}
Suppose that line X is replaced with:
return f1.join() + f2.compute(); // Line X
What is the likely result?
A. The program produces the correct result, with similar performance to the original.
B. The program produces the correct result, with performance degraded to the equivalent of being single-threaded.
C. The program produces an incorrect result.
D. The program goes into an infinite loop.
E. An exception is thrown at runtime.
F. The program produces the correct result, with better performance than the original.
Explanation:
join() does not return until the forked task's result is available. With the order reversed, the calling code waits on f1.join() before it does any computing of its own, so no work is done in parallel. The result is still correct, but the speed-up is lost and performance degrades to roughly that of a single-threaded version.
Correct Answer is B.
Answer is B
However, besides being a dumb way to compute Fibonacci functions (there is a simple fast linear algorithm that you’d use in practice), this is likely to perform poorly because the smallest subtasks are too small to be worthwhile splitting up. Instead, as is the case for nearly all fork/join applications, you’d pick some minimum granularity size (for example 10 here) for which you always sequentially solve rather than subdividing.
Reference : http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/RecursiveTask.html
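To illustrate the granularity cutoff the javadoc describes, here is a rough sketch (not part of the question): the class name, the THRESHOLD of 10, and the sequentialFib helper are all made up for illustration, and the cutoff value would normally be tuned for the workload.

import java.util.concurrent.RecursiveTask;

class FibonacciWithCutoff extends RecursiveTask<Integer> {
    private static final int THRESHOLD = 10; // arbitrary cutoff, tune as needed
    final int n;
    FibonacciWithCutoff(int n) { this.n = n; }

    @Override
    protected Integer compute() {
        if (n <= THRESHOLD) {
            return sequentialFib(n);              // small subproblems: solve directly
        }
        FibonacciWithCutoff f1 = new FibonacciWithCutoff(n - 1);
        f1.fork();                                // run f1 asynchronously
        FibonacciWithCutoff f2 = new FibonacciWithCutoff(n - 2);
        return f2.compute() + f1.join();          // compute f2 here, then wait for f1
    }

    // Simple iterative Fibonacci used below the threshold.
    private static int sequentialFib(int n) {
        if (n <= 1) return n;
        int a = 0, b = 1;
        for (int i = 2; i <= n; i++) {
            int next = a + b;
            a = b;
            b = next;
        }
        return b;
    }
}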
B
Got the code to run using
ForkJoinPool p = new ForkJoinPool(5);
p.invoke(new Fibonacci(25));
in the main method. But does anyone have a performance (time taken) result using the code as compared to a single-threaded, sequential version?
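I don't have timings to share, but here is a rough, self-contained way to measure it yourself. It assumes the Fibonacci class from the question is compiled alongside; the class name FibBenchmark and the pool size are just illustrative, and a single run without JIT warm-up only gives an indication.

import java.util.concurrent.ForkJoinPool;

public class FibBenchmark {
    // Plain recursive version for comparison (same algorithm, no fork/join overhead).
    static int fibSequential(int n) {
        return n <= 1 ? n : fibSequential(n - 1) + fibSequential(n - 2);
    }

    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool(5);

        long t0 = System.nanoTime();
        int parallel = pool.invoke(new Fibonacci(25));
        long t1 = System.nanoTime();

        int sequential = fibSequential(25);
        long t2 = System.nanoTime();

        System.out.printf("fork/join:  %d in %d ms%n", parallel, (t1 - t0) / 1_000_000);
        System.out.printf("sequential: %d in %d ms%n", sequential, (t2 - t1) / 1_000_000);
    }
}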
Thank you for the information about ForkJoinPool and invoke.
B, because each task waits for its forked subtask to complete before computing the other part, so the work effectively runs sequentially.
+1