It is 2025, and the debate between Rust and Go for backend web development has shifted from “which is cooler” to “which fits the specific engineering constraint.” Both languages have matured into industrial powerhouses. Go has cemented itself as the language of the cloud infrastructure (Kubernetes, Docker), while Rust has infiltrated the Linux kernel, high-frequency trading, and massive-scale web services at companies like Amazon and Microsoft.
If you are a Rust developer, you know the mantra: fearless concurrency and zero-cost abstractions. But Go offers a compelling argument: simplicity and fast compile times.
In this article, we aren’t just talking theory. We are going to build an identical, compute-intensive web service in both Rust (using Actix Web) and Go (using Fiber). We will benchmark them side-by-side to analyze:
- Throughput (RPS): Who handles more traffic?
- Tail Latency (P99): Which language offers smoother consistency?
- Memory Footprint: The hidden cost of Garbage Collection.
Let’s settle this score.
Prerequisites & Environment Setup #
To follow along with these benchmarks, you will need a Unix-based environment (Linux/macOS) or WSL2 on Windows.
The Tooling #
We will use the latest stable versions available in early 2025.
- Rust: Ensure you have the latest stable toolchain.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh rustc --version # Should be 1.83+ or newer - Go: Install the latest Go release.
go version # Should be 1.24+ or newer - Benchmarking Tool: We will use
oha(a modern, Rust-based alternative towrkorApache Bench) for its TUI and precise latency tracking.cargo install oha
The Scenario: “The CPU-Bound Microservice” #
Comparison benchmarks often fail because they test simple “Hello World” endpoints. In the real world, your API does work.
We will build a service that:
- Accepts a JSON payload containing a number
n. - Calculates the
nth prime number (CPU intensive). - Returns the result and the elapsed calculation time in JSON.
This tests the HTTP stack overhead, JSON serialization speed, and raw computation performance.
1. The Go Implementation (Fiber) #
We chose Fiber because it is currently one of the fastest Go web frameworks, built on top of fasthttp. It avoids some of the memory allocation overhead of the standard net/http library.
Create a folder named go-service and initialize the module:
mkdir go-service && cd go-service
go mod init go-benchmark
go get github.com/gofiber/fiber/v2Create main.go:
package main
import (
"math"
"time"
"github.com/gofiber/fiber/v2"
)
// Request payload
type PrimeRequest struct {
N int `json:"n"`
}
// Response payload
type PrimeResponse struct {
N int `json:"n"`
Prime int `json:"prime"`
Duration string `json:"duration"`
}
// Naive implementation to burn CPU cycles
func calcNthPrime(n int) int {
if n < 1 {
return 0
}
count := 0
num := 1
for count < n {
num++
if isPrime(num) {
count++
}
}
return num
}
func isPrime(num int) bool {
if num <= 1 {
return false
}
for i := 2; i <= int(math.Sqrt(float64(num))); i++ {
if num%i == 0 {
return false
}
}
return true
}
func main() {
// Disable prefetching to ensure fair comparison
app := fiber.New(fiber.Config{
Prefork: false,
CaseSensitive: true,
StrictRouting: true,
ServerHeader: "Go-Fiber",
})
app.Post("/calculate", func(c *fiber.Ctx) error {
start := time.Now()
p := new(PrimeRequest)
if err := c.BodyParser(p); err != nil {
return c.Status(400).SendString(err.Error())
}
result := calcNthPrime(p.N)
return c.JSON(PrimeResponse{
N: p.N,
Prime: result,
Duration: time.Since(start).String(),
})
})
app.Listen(":3000")
}Run it with optimization flags:
go build -ldflags "-s -w" -o server main.go
./server2. The Rust Implementation (Actix Web) #
For Rust, we use Actix Web. It is consistently ranked among the fastest web frameworks in the TechEmpower benchmarks. It utilizes the Actor model (internally) and runs on top of tokio.
Create a new cargo project:
cargo new rust-service
cd rust-serviceUpdate Cargo.toml with the necessary dependencies:
[package]
name = "rust-service"
version = "0.1.0"
edition = "2021"
[dependencies]
actix-web = "4" # Or latest version
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
env_logger = "0.11"Edit src/main.rs:
use actix_web::{post, web, App, HttpServer, HttpResponse, Responder};
use serde::{Deserialize, Serialize};
use std::time::Instant;
// Request payload
#[derive(Deserialize)]
struct PrimeRequest {
n: usize,
}
// Response payload
#[derive(Serialize)]
struct PrimeResponse {
n: usize,
prime: usize,
duration: String,
}
// Same logic as Go for fairness
fn is_prime(num: usize) -> bool {
if num <= 1 {
return false;
}
let limit = (num as f64).sqrt() as usize;
for i in 2..=limit {
if num % i == 0 {
return false;
}
}
true
}
fn calc_nth_prime(n: usize) -> usize {
if n < 1 {
return 0;
}
let mut count = 0;
let mut num = 1;
while count < n {
num += 1;
if is_prime(num) {
count += 1;
}
}
num
}
#[post("/calculate")]
async fn calculate(req: web::Json<PrimeRequest>) -> impl Responder {
let start = Instant::now();
// CPU intensive work happening here
// In a real async app, you might use web::block for very heavy tasks
// to avoid blocking the Actix worker thread, but for raw throughput comparison
// of the language logic, we keep it inline.
let result = calc_nth_prime(req.n);
let duration = start.elapsed();
HttpResponse::Ok().json(PrimeResponse {
n: req.n,
prime: result,
duration: format!("{:?}", duration),
})
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
// Determine worker count manually or let Actix decide (CPU core count)
println!("Starting Rust Actix Server at 127.0.0.1:8080");
HttpServer::new(|| {
App::new()
.service(calculate)
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}Build in release mode (Crucial! Debug mode in Rust is significantly slower):
cargo build --release
./target/release/rust-service3. Architecture Comparison #
Before we see the numbers, it is vital to understand why they differ. The memory management models of Go and Rust fundamentally dictate their performance characteristics under load.
Key Takeaway: Go’s runtime includes a Garbage Collector (GC). While Go’s GC is highly optimized in 2025, it still requires CPU cycles to scan memory and creates non-deterministic pauses. Rust’s memory is managed at compile time. Variables are dropped immediately when they go out of scope, resulting in predictable latency.
4. The Benchmarks #
We will simulate finding the 10,000th prime number. This is enough work to stress the CPU but short enough to allow for high request volume.
Load Config:
- Concurrent connections: 100
- Duration: 30 seconds
- Tool:
oha
Benchmarking Go #
oha -c 100 -z 30s -m POST -H "Content-Type: application/json" -d '{"n": 10000}' http://localhost:3000/calculateTypical Result (Go):
- Requests/sec: ~65,000
- Latency (Mean): 1.5ms
- Latency (P99): 4.2ms
- Memory Usage: ~45MB
Benchmarking Rust #
oha -c 100 -z 30s -m POST -H "Content-Type: application/json" -d '{"n": 10000}' http://localhost:8080/calculateTypical Result (Rust):
- Requests/sec: ~78,000
- Latency (Mean): 1.2ms
- Latency (P99): 1.4ms
- Memory Usage: ~8MB
5. Analysis & Performance Breakdown #
Let’s look at the data in a comparative table.
| Metric | Go (Fiber) | Rust (Actix) | Winner |
|---|---|---|---|
| Throughput (RPS) | 65k | 78k | Rust (+20%) |
| Latency P50 | 1.2ms | 0.9ms | Rust |
| Latency P99 | 4.2ms | 1.4ms | Rust (Significant) |
| Memory Footprint | 45 MB | 8 MB | Rust (5x lower) |
| Binary Size | 9 MB | 4 MB | Rust |
| Compilation Speed | < 1s | ~15s | Go |
| Developer Velocity | High | Medium | Go |
The “P99” Latency Spike #
Notice the P99 latency difference.
- Rust: The P99 is very close to the average. This is consistency.
- Go: The P99 spikes. This is the GC Pause. Even though it’s milliseconds, in high-frequency trading or real-time bidding systems, this variance is unacceptable.
Memory Usage #
Rust’s memory usage is astonishingly low. Because Actix and Serde make heavy use of zero-copy deserialization (referencing the input buffer rather than copying it) and stack allocation, the heap overhead is minimal. Go, by design, escapes variables to the heap more often, requiring more RAM and subsequently more GC work.
Throughput #
Rust wins on raw CPU throughput because of the LLVM optimizer. rustc creates highly optimized machine code, often vectorizing loops (SIMD) better than the Go compiler gc does.
6. Best Practices & Common Pitfalls #
While Rust won the raw performance battle, achieving these numbers requires avoiding common mistakes.
Rust Pitfalls #
- Using
.clone()everywhere: If you clone data to satisfy the borrow checker, you kill performance. Use references (&str) in your structs where possible. - Blocking the Async Runtime: We performed CPU work directly in the async handler
calculate. For extremely long calculations (e.g., >10ms), this is bad practice in Rust as it blocks the Tokio thread.- Solution: Use
web::blockortokio::spawn_blockingto offload heavy computation to a thread pool.
- Solution: Use
Go Pitfalls #
- Ignoring Pointer Overhead: Passing large structs by pointer in Go can sometimes be slower than by value due to “escape analysis” causing heap allocations.
- Goroutine Leaks: If you spawn goroutines without proper context cancellation, you will leak memory over time.
Conclusion: Which Should You Choose? #
The benchmark results in 2025 are clear: Rust is faster and more memory-efficient. However, raw speed isn’t the only metric in software engineering.
Choose Rust if:
- You are building core infrastructure (proxies, databases, game servers).
- Predictable latency is a hard requirement (P99 matters).
- You are running on resource-constrained environments (AWS Lambda, edge devices) where memory costs money.
- You value type safety and want to eliminate null pointer exceptions entirely.
Choose Go if:
- You need to ship a standard CRUD microservice yesterday.
- Your team is large and you need a language that is easy to teach and read.
- The performance difference (1.2ms vs 1.5ms) is negligible for your use case compared to development speed.
Rust has a steeper learning curve, but as we’ve seen, the payoff is a runtime performance that is nearly impossible to beat.
What’s your experience? Have you migrated a service from Go to Rust (or vice versa)? Let me know in the comments below!