In the landscape of modern backend development in 2025, users have zero tolerance for slow or dumb search bars. Whether you are building an e-commerce platform, a log aggregator, or a content management system, the expectation is Google-like speed and relevancy.
While PostgreSQL and MySQL are fantastic databases, their `LIKE '%...%'` queries are performance killers at scale, because a leading wildcard defeats index use and forces a full scan. This is where Elasticsearch shines.
In this guide, we are going to bypass the theory and go straight into building a production-ready search service using Go (Golang) and the official Elasticsearch v8 client. We will cover environment setup, bulk indexing strategies, and executing complex multi-match queries.
Why Elasticsearch and Go? #
Go’s concurrency model is a match made in heaven for Elasticsearch’s distributed nature. Go can handle high-throughput concurrent requests to ingest data into Elasticsearch, while Elasticsearch handles the heavy lifting of relevance scoring (BM25) and inverted indices.
Before we write code, here is the architectural flow we are about to build: a Go service bulk-ingests product documents into an Elasticsearch index, then serves full-text queries against it.
Prerequisites & Environment Setup #
To follow this tutorial, ensure you have the following ready:
- Go 1.23+: Any recent stable release works; Go 1.22 and later also give you the saner per-iteration loop variable semantics.
- Docker & Docker Compose: To spin up Elasticsearch locally.
- IDE: VS Code or GoLand.
1. Setting up Elasticsearch v8 #
Running Elasticsearch v8 enables security features (TLS and authentication) by default. For a development environment, we will disable security entirely to keep our code clean. Never do this in production.
Create a docker-compose.yml file in your project root:
```yaml
version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.0
    container_name: es_dev
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false # Disabled for local dev simplicity
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
    networks:
      - go-elastic-net
networks:
  go-elastic-net:
    driver: bridge
```

Run the container:
```bash
docker-compose up -d
```

Verify it’s running by curling http://localhost:9200. You should see the tagline “You Know, for Search”.
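The response should look roughly like this (abridged; your node name, cluster UUID, and build details will differ):

```json
{
  "name" : "...",
  "cluster_name" : "docker-cluster",
  "version" : {
    "number" : "8.15.0"
  },
  "tagline" : "You Know, for Search"
}
```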
2. Project Initialization #
Initialize your Go module and install the official Elastic client. We are strictly using the v8 library, as v7 is legacy in 2025.
```bash
mkdir go-search-engine
cd go-search-engine
go mod init github.com/yourname/go-search-engine
go get github.com/elastic/go-elasticsearch/v8
```

Step 1: Defining the Data Model #
Let’s imagine we are building a product search for an electronics store. We need a robust struct that represents our document.
Create a file named model.go:
```go
package main

// Product represents the document stored in Elasticsearch.
type Product struct {
    ID          string   `json:"id"`
    Title       string   `json:"title"`
    Description string   `json:"description"`
    Category    string   `json:"category"`
    Price       float64  `json:"price"`
    Tags        []string `json:"tags"`
}
```

Step 2: Configuring the Client #
Connecting to Elasticsearch isn’t just about the URL. You need to handle retries and connection pooling. The official client does much of this for you, but explicit configuration is best.
Create main.go and set up the client initialization:
```go
package main

import (
    "fmt"
    "log"

    elasticsearch "github.com/elastic/go-elasticsearch/v8"
)

func getESClient() (*elasticsearch.Client, error) {
    cfg := elasticsearch.Config{
        Addresses: []string{
            "http://localhost:9200",
        },
        // In production, uncomment and use API keys or Basic Auth.
        // Username: "elastic",
        // Password: "changeme",
    }

    es, err := elasticsearch.NewClient(cfg)
    if err != nil {
        return nil, err
    }

    // Verify the connection against the cluster info endpoint.
    res, err := es.Info()
    if err != nil {
        return nil, err
    }
    defer res.Body.Close()

    if res.IsError() {
        return nil, fmt.Errorf("cluster info returned an error: %s", res.String())
    }
    return es, nil
}

func main() {
    es, err := getESClient()
    if err != nil {
        log.Fatalf("Error creating the client: %s", err)
    }
    _ = es // silences "declared and not used" until we wire it up below

    log.Println("Elasticsearch connection established successfully.")
    // We will add logic here later.
}
```

Step 3: Index Management and Mapping #
A common mistake is letting Elasticsearch “guess” your data types (Dynamic Mapping). While convenient, it often leads to wrong types and poor performance. For example, a timestamp in a non-standard format may be mapped as plain text, breaking range queries.
We must explicitly define our index mapping.
Add this function to main.go:
```go
import (
    "strings"
    // ... other imports from before
)
```
```go
func createIndex(es *elasticsearch.Client, indexName string) {
    mapping := `
    {
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0
      },
      "mappings": {
        "properties": {
          "title":       { "type": "text" },
          "description": { "type": "text" },
          "category":    { "type": "keyword" },
          "price":       { "type": "float" },
          "tags":        { "type": "keyword" }
        }
      }
    }`

    res, err := es.Indices.Create(
        indexName,
        es.Indices.Create.WithBody(strings.NewReader(mapping)),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer res.Body.Close()

    if res.IsError() {
        // This also fires if the index already exists, which is fine on re-runs.
        log.Printf("Index creation warning: %s", res.String())
    } else {
        log.Println("Index created successfully")
    }
}
```

Key Concept: Notice we use `text` for fields we want full-text search on (title, description) and `keyword` for exact filtering (category, tags).
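To make the `text` vs. `keyword` distinction concrete, here is a small illustrative sketch (not part of the service code; the function name is mine):

```go
// keywordVsText contrasts exact and analyzed matching on our index.
func keywordVsText() (map[string]interface{}, map[string]interface{}) {
    // A term query on the "category" keyword field must match verbatim;
    // "electronics" (lowercase) would NOT match the stored "Electronics".
    exactFilter := map[string]interface{}{
        "query": map[string]interface{}{
            "term": map[string]interface{}{"category": "Electronics"},
        },
    }
    // A match query on the "title" text field is analyzed first, so case
    // and word order do not matter: "gaming LAPTOP" still finds the doc.
    analyzedQuery := map[string]interface{}{
        "query": map[string]interface{}{
            "match": map[string]interface{}{"title": "gaming LAPTOP"},
        },
    }
    return exactFilter, analyzedQuery
}
```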
Step 4: High-Performance Bulk Indexing #
Indexing documents one by one using a loop is a significant bottleneck. Network latency will kill your throughput. The go-elasticsearch library provides a helper package esutil for bulk operations.
Here is how to ingest data efficiently:
```go
import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "time"

    "github.com/elastic/go-elasticsearch/v8/esutil"
    // ... plus "log" and the elasticsearch import from before
)

func indexProducts(es *elasticsearch.Client, indexName string, products []Product) {
    indexer, err := esutil.NewBulkIndexer(esutil.BulkIndexerConfig{
        Index:  indexName,
        Client: es,
        // Flush when the buffer is full or every second.
        FlushInterval: 1 * time.Second,
    })
    if err != nil {
        log.Fatalf("Error creating the indexer: %s", err)
    }

    for _, p := range products {
        data, err := json.Marshal(p)
        if err != nil {
            log.Printf("Cannot encode product %s: %s", p.ID, err)
            continue
        }

        err = indexer.Add(
            context.Background(),
            esutil.BulkIndexerItem{
                Action:     "index",
                DocumentID: p.ID,
                Body:       bytes.NewReader(data),
                OnSuccess: func(ctx context.Context, item esutil.BulkIndexerItem, res esutil.BulkIndexerResponseItem) {
                    // Optional: minimal logging
                },
                OnFailure: func(ctx context.Context, item esutil.BulkIndexerItem, res esutil.BulkIndexerResponseItem, err error) {
                    if err != nil {
                        log.Printf("ERROR: %s", err)
                    } else {
                        log.Printf("ERROR: %s: %s", res.Error.Type, res.Error.Reason)
                    }
                },
            },
        )
        if err != nil {
            log.Fatalf("Error adding item to indexer: %s", err)
        }
    }

    // Close flushes any remaining buffered items and waits for the workers.
    if err := indexer.Close(context.Background()); err != nil {
        log.Fatalf("Unexpected error: %s", err)
    }

    stats := indexer.Stats()
    log.Printf("Indexed %d documents with %d errors", stats.NumFlushed, stats.NumFailed)
}
```

Step 5: Implementing the Search Logic #
Now for the main event. We want a query that:
- Searches for keywords in Title (boosted importance) and Description.
- Filters by a minimum price.
- Allows for “fuzziness” (handling typos like “laptap” instead of “laptop”).
With `bytes`, `encoding/json`, and `context` already imported for the bulk indexer, no new imports are needed:
```go
func searchProducts(es *elasticsearch.Client, indexName string, query string) {
    var buf bytes.Buffer

    // Construct the JSON query body.
    queryBody := map[string]interface{}{
        "query": map[string]interface{}{
            "bool": map[string]interface{}{
                "must": []interface{}{
                    map[string]interface{}{
                        "multi_match": map[string]interface{}{
                            "query":     query,
                            "fields":    []string{"title^2", "description"}, // title is 2x more important
                            "fuzziness": "AUTO",
                        },
                    },
                },
                "filter": []interface{}{
                    map[string]interface{}{
                        "range": map[string]interface{}{
                            "price": map[string]interface{}{
                                "gte": 10, // Only items priced at $10 or more
                            },
                        },
                    },
                },
            },
        },
    }
    if err := json.NewEncoder(&buf).Encode(queryBody); err != nil {
        log.Fatalf("Error encoding query: %s", err)
    }

    res, err := es.Search(
        es.Search.WithContext(context.Background()),
        es.Search.WithIndex(indexName),
        es.Search.WithBody(&buf),
        es.Search.WithTrackTotalHits(true),
        es.Search.WithPretty(),
    )
    if err != nil {
        log.Fatalf("Error getting response: %s", err)
    }
    defer res.Body.Close()

    if res.IsError() {
        log.Fatalf("[%s] Error searching: %s", res.Status(), res.String())
    }

    // Parse the response.
    var r map[string]interface{}
    if err := json.NewDecoder(res.Body).Decode(&r); err != nil {
        log.Fatalf("Error parsing the response body: %s", err)
    }

    log.Printf(
        "Found %d hits in %dms",
        int(r["hits"].(map[string]interface{})["total"].(map[string]interface{})["value"].(float64)),
        int(r["took"].(float64)),
    )

    // Iterate over hits.
    for _, hit := range r["hits"].(map[string]interface{})["hits"].([]interface{}) {
        source := hit.(map[string]interface{})["_source"]
        log.Printf(" * ID=%s, Title=%s", hit.(map[string]interface{})["_id"], source.(map[string]interface{})["title"])
    }
}
```

Comparison: SQL vs. Elasticsearch #
Why go through this trouble? Here is a breakdown of why we move search logic out of the primary relational database.
| Feature | SQL Database (Postgres/MySQL) | Elasticsearch |
|---|---|---|
| Search Method | Exact match or `LIKE '%term%'` | Inverted Index (Tokens) |
| Typo Tolerance | Very Hard (Requires extensions) | Native (Fuzzy Search) |
| Relevancy Scoring | No (Boolean Results) | Yes (BM25 Algorithm) |
| Scalability | Vertical Scaling (Expensive) | Horizontal Sharding (Native) |
| Data Structure | Normalized Tables | Denormalized JSON Documents |
| Write Speed | ACID compliant (Slower) | Near Real-time (Fast) |
Best Practices and Common Pitfalls #
1. Connection Pooling #
The go-elasticsearch client uses http.Transport under the hood. It supports keep-alive connections by default. Do not create a new client for every request; initialize it once (singleton pattern) and reuse it.
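A minimal sketch of that singleton pattern, assuming the `getESClient` constructor from Step 2 and an extra `sync` import (the variable and function names are mine):

```go
// The client (and its connection pool) is built exactly once and then
// shared by every request handler.
var (
    esOnce   sync.Once
    esShared *elasticsearch.Client
    esErr    error
)

func sharedESClient() (*elasticsearch.Client, error) {
    esOnce.Do(func() {
        esShared, esErr = getESClient() // reuses the constructor from Step 2
    })
    return esShared, esErr
}
```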
2. Deep Paging #
Avoid letting users jump to page 50,000 of results. Using from and size parameters works for shallow pagination, but deep pagination burns CPU and memory because every shard must materialize all preceding hits. Use Search After, as sketched below, or the Scroll API (for exports) if you need to access deep data.
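Here is a hedged sketch of the search_after pattern against our product index. It assumes the dynamically mapped `id.keyword` subfield exists to serve as a unique tiebreaker; the function name is illustrative:

```go
// nextPageQuery builds the body for the next page: repeat the same sorted
// query and pass the sort values of the previous page's last hit.
func nextPageQuery(lastPrice float64, lastID string) map[string]interface{} {
    return map[string]interface{}{
        "size": 20,
        "sort": []interface{}{
            map[string]interface{}{"price": "asc"},
            // Unique tiebreaker; assumes the dynamic keyword subfield
            // generated for our "id" field.
            map[string]interface{}{"id.keyword": "asc"},
        },
        // Values copied from the "sort" array of the previous page's last hit.
        "search_after": []interface{}{lastPrice, lastID},
        "query": map[string]interface{}{
            "match_all": map[string]interface{}{},
        },
    }
}
```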
3. Handling Version Conflicts #
If you are updating documents frequently, you will encounter version_conflict_engine_exception. Use the RetryOnConflict parameter in your update calls if your logic allows for last-write-wins, or implement optimistic locking using _seq_no and _primary_term.
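A minimal last-write-wins sketch using the standard Update API (reusing the `fmt` and `strings` imports from earlier; the function name, document ID, and field values are illustrative):

```go
// updatePrice retries the update a few times on version conflicts,
// accepting last-write-wins semantics.
func updatePrice(es *elasticsearch.Client, indexName, docID string, price float64) error {
    body := fmt.Sprintf(`{"doc": {"price": %.2f}}`, price)
    res, err := es.Update(
        indexName,
        docID,
        strings.NewReader(body),
        // Retry up to 3 times on version_conflict_engine_exception.
        es.Update.WithRetryOnConflict(3),
    )
    if err != nil {
        return err
    }
    defer res.Body.Close()
    if res.IsError() {
        return fmt.Errorf("update failed: %s", res.String())
    }
    return nil
}
```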
4. Structuring for SEO #
When building your public-facing search pages, render the results server-side (SSR) where possible so crawlers can index them, and regenerate your sitemap.xml from popular search terms to drive organic traffic.
Wiring It All Together #
Update your main function to run the full lifecycle:
```go
func main() {
    // 1. Init client
    es, err := getESClient()
    if err != nil {
        log.Fatal(err)
    }

    indexName := "products_v1"

    // 2. Create index
    createIndex(es, indexName)

    // 3. Mock data
    products := []Product{
        {ID: "1", Title: "Gaming Laptop Pro", Description: "High performance gaming laptop", Price: 1200.00, Category: "Electronics"},
        {ID: "2", Title: "Office Mouse", Description: "Wireless mouse for work", Price: 25.00, Category: "Electronics"},
        {ID: "3", Title: "Coffee Maker", Description: "Brews great coffee", Price: 55.00, Category: "Home"},
    }

    // 4. Index data
    indexProducts(es, indexName, products)

    // Wait for Elasticsearch's near-real-time refresh before searching.
    time.Sleep(2 * time.Second)

    // 5. Search
    fmt.Println("--- Searching for 'laptop' ---")
    searchProducts(es, indexName, "laptop")

    fmt.Println("--- Searching for 'mousse' (typo test) ---")
    searchProducts(es, indexName, "mousse")
}
```

Conclusion #
Integrating Elasticsearch with Golang allows you to build search experiences that feel instantaneous. By leveraging the esutil package for bulk processing and defining explicit mappings instead of relying on dynamic ones, you can maintain a clean, performant codebase.
As we move through 2025, the demand for semantic search (vector search) is rising. The good news is that the setup you just built is fully compatible with Elasticsearch’s kNN vector search capabilities—a topic we will cover in an upcoming “Deep Dive” article.
Further Reading:
- The official Go client repository: https://github.com/elastic/go-elasticsearch
- Elasticsearch Guide: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
Get your hands dirty, run the code, and stop using SQL LIKE queries for user-facing search!