I have recently started to look into Java so that I might have a better insight into OOP and design patterns. When playing around with a simple loop I thought about comparing the speeds of Python and Java, and after realising that Java was much faster I decided to compare all the languages I am familiar with. Below is a rambling study of how Java, Golang, Python, Ruby and JavaScript handle a for loop with 10,000 operations to calculate. I have included the results below in an easy-to-read table. If you'd like to see the methodology you can continue reading.

Language    Average time per loop (nanoseconds)
Java        120500
Python      105580
Golang      27619
Ruby        382600
Node.js     698809
To compare all the languages I used two nested for loops, building up work by adding the loop counter to a running variable. See the example below in Java: the j loop repeats 10,000 times, and each pass adds j to the int total. I repeat this 10 times and take the average time across all the loops. To no one's surprise, Java performs this very quickly, with each loop taking around 120500 nanoseconds.
public class Speed {
    public static void main(String[] args) {
        long totalTime = 0;
        for (int i = 0; i < 10; i++) {
            long startTime = System.nanoTime();
            int total = 0;
            for (int j = 0; j < 10000; j++) {
                total += j;
            }
            long endTime = System.nanoTime();
            totalTime += endTime - startTime;
        }
        System.out.println(totalTime / 10);
    }
}
If we compare this with an identical loop in Python (105580 nanoseconds) we can see that Java performs exceptionally well. Of course there is some discussion to be had about Java's syntax: how much more text there is, how much more difficult it is to write, and so on. However, if we think about automated tasks that need to be repeated hundreds of thousands of times, then a Java loop would only have to be performed 1,000,000,000 times for there to be a second saved overall. When it comes to API calls or sorting through large volumes of text, Java holds the obvious advantage, but for quick and "hacky" solutions that only need to run once, maybe twice, Python is still king.
import time

total_time = 0
for i in range(10):
    start = time.time_ns()
    total = 0
    for j in range(10000):
        total += j
    end = time.time_ns()
    total_time += end - start
print(total_time / 10)
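As an aside, if you want to repeat the Python measurement without hand-rolling the timing loop, the standard library's timeit module handles the repetition and clock selection for you. Below is a minimal sketch of the same measurement (the loop function name is my own), converting timeit's result from seconds to nanoseconds to match the numbers above:

import timeit

def loop():
    total = 0
    for j in range(10000):
        total += j

# timeit returns the total seconds across all runs, so divide by the
# run count and convert to nanoseconds
print(timeit.timeit(loop, number=10) / 10 * 1_000_000_000)

As a bonus, timeit temporarily disables the garbage collector while timing, which makes the runs a little more stable than the hand-rolled version.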
From here onwards I compared the same loop in several other languages, for which I will present just the code and the times they ran.
Golang - 27619 nanoseconds
package main

import (
    "fmt"
    "time"
)

func main() {
    var totalTime time.Duration
    for j := 0; j < 10; j++ {
        startTime := time.Now()
        total := 0
        for i := 0; i < 10000; i++ {
            total += i
        }
        totalTime += time.Since(startTime)
    }
    fmt.Println(totalTime.Nanoseconds() / 10)
}
Ruby - 382600 nanoseconds
total_time = 0
10.times do
  start_time = Time.now
  total = 0
  for i in 1..10_000 do
    total += i
  end
  end_time = Time.now
  total_time += end_time - start_time
end
# Time.now differences are in seconds, so convert the average to nanoseconds
puts((total_time / 10 * 1_000_000_000).round)
Node.js - 698809 nanoseconds
const { performance } = require("perf_hooks");

let totalTime = 0.0;
for (let j = 0; j < 10; j++) {
    const t0 = performance.now();
    let total = 0;
    for (let i = 0; i < 10000; i++) {
        total += i;
    }
    const t1 = performance.now();
    totalTime += t1 - t0;
}
console.log(totalTime * 1000000 / 10); // milliseconds to nanoseconds
Although the time differences look big, there are obviously a lot of things to factor in. Structuring a project in Python is much easier than doing so in Golang or Java, so the fractions of a second you might save in performance you will no doubt make up for in time spent struggling to get your own code to reach certain folders. Golang is clearly the fastest here (thanks to compiling ahead of time into machine code), and so it's the obvious choice for things like Docker containers, cloud-based servers and anything that requires hundreds of thousands of pings.