Performance Comparison between Python and Ruby

Stokry - Jul 27 '23 - Dev Community

Python and Ruby are two popular interpreted programming languages, and both are known for their simplicity, readability, and ease of use. When it comes to performance, however, there are real differences between the two. In this blog post, we will compare the performance of Python and Ruby on a simple CPU-bound benchmark and see which one is faster.

Python

Python is an interpreted, high-level, general-purpose programming language that emphasizes code readability and ease of use. Python is known for its simplicity, clean syntax, and vast library support, and it is widely used in scientific computing, machine learning, and web development. In those domains, much of the heavy lifting is done by libraries implemented in C (such as NumPy), so the speed of the interpreter itself is only part of the story.

As for the interpreter itself, CPython (the reference implementation) uses a Global Interpreter Lock (GIL), which ensures that only one thread executes Python bytecode at a time. The GIL simplifies memory management, because reference counting does not need fine-grained locking, but it also means that CPU-bound code cannot run in parallel across threads within a single process.
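A quick way to see the GIL in action is to time a CPU-bound function run twice in a row versus in two threads. The sketch below is only illustrative (the busy_count helper and the iteration count are made up for the demo); on CPython, the threaded version is typically no faster than the sequential one:

import threading
import time

def busy_count(n):
    # Pure-Python CPU-bound loop; it holds the GIL while it runs.
    while n > 0:
        n -= 1

N = 10_000_000

# Run the work twice, one call after the other.
start = time.time()
busy_count(N)
busy_count(N)
print("sequential:", time.time() - start)

# Run the same work in two threads. Because of the GIL, only one thread
# executes Python bytecode at a time, so there is usually no speedup.
start = time.time()
threads = [threading.Thread(target=busy_count, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("threaded:  ", time.time() - start)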

Ruby

Ruby is another popular interpreted programming language that is known for its simplicity, readability, and expressiveness. Ruby is often used for web development, scripting, and automation. Ruby has a similar syntax to Python and is known for its easy-to-read code.

When it comes to performance, the conventional wisdom is that Ruby is slower than Python, but with modern interpreters that is no longer a safe assumption, as the benchmark below shows. It is also not true that Ruby has no equivalent of the GIL: CRuby (MRI), the reference implementation, has a Global VM Lock (GVL) that likewise allows only one thread to execute Ruby code at a time. Recent releases in the Ruby 3.x series, especially with the YJIT just-in-time compiler, have improved execution speed considerably for CPU-bound code.

Performance Comparison

Comparing the performance of programming languages can be tricky, as it depends on various factors such as the nature of the task, specific implementations of the algorithm in each language, computer hardware, and more. Nonetheless, to provide an example, I will use a basic mathematical operation: calculating Fibonacci numbers, which is a common benchmark. This task is reasonably CPU-intensive and should demonstrate the difference in performance between Python and Ruby. The goal is to calculate the Nth Fibonacci number, where N is a large number.

Let's jump to the code:

First, the Python code:

import time

def fib(n):
    # Naive recursive Fibonacci: exponential time, deliberately CPU-heavy.
    if n <= 1:
        return n
    else:
        return fib(n - 1) + fib(n - 2)

n = 35

start_time = time.time()
print(fib(n))
end_time = time.time()

print("Execution Time:", end_time - start_time)
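As a side note, time.time() measures wall-clock time and can be affected by clock adjustments; for interval measurements, time.perf_counter() is the usual choice. A minimal variant of the timing harness (same naive fib, just a different clock) might look like this:

import time

def fib(n):
    # Same naive recursive Fibonacci as above.
    return n if n <= 1 else fib(n - 1) + fib(n - 2)

start = time.perf_counter()  # monotonic, high-resolution timer
result = fib(35)
elapsed = time.perf_counter() - start

print(result)
print(f"Execution Time: {elapsed:.3f} seconds")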

Here is the Ruby code:

require 'time'  # not strictly needed: Time.now is part of Ruby core

def fib(n)
  # Naive recursive Fibonacci, mirroring the Python version.
  if n <= 1
    return n
  else
    return fib(n - 1) + fib(n - 2)
  end
end

n = 35

start_time = Time.now
puts fib(n)
end_time = Time.now

puts "Execution Time: #{end_time - start_time}"

Results


The results show the time taken to compute the 35th Fibonacci number using a recursive function in both Python and Ruby.

  • Python took approximately 3.125 seconds
  • Ruby took approximately 0.889 seconds

In this run, Ruby was roughly 3.5 times faster than Python on the naive recursive benchmark. That said, performance can vary greatly depending on the type of task, the specific implementation, the interpreter versions, and the underlying hardware; this example is deliberately simple, and more complex tasks might show different results.
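To illustrate how much the specific implementation matters, here is a sketch of the same function memoized with functools.lru_cache. The recursion becomes effectively linear in n, so fib(35) finishes almost instantly; exact timings will of course depend on your machine:

import time
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Same recursion, but each fib(k) is computed only once and then cached.
    return n if n <= 1 else fib(n - 1) + fib(n - 2)

start = time.perf_counter()
print(fib(35))
print("Execution Time:", time.perf_counter() - start)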

Thank you all.
