Querying your OpenSearch index
In the old days of Connections, we would check our Elasticsearch indexes through Kibana, but Kibana is no longer part of an HCL Connections Component Pack installation. In HCL Connections 8, OpenSearch is a core component powering features like Orient Me, Type-Ahead Search, and Metrics. As an administrator, it’s critical not just to ensure that OpenSearch is running, but to verify that it’s indexing, storing, and serving content correctly.
Being able to query OpenSearch directly allows you to:
- 🔎 Inspect whether key indexes (e.g. quickresults, orientme) exist and are populated
- 📦 Monitor document counts, index sizes, and shard health to detect imbalance or data loss
- 🧩 Analyze mappings and field types to understand how data is structured and queried
- 🛠️ Troubleshoot user-facing issues (e.g., missing search results, outdated content)
- 💾 Interact with snapshot and restore APIs for backup validation and disaster recovery
This article focuses on practical OpenSearch queries using the REST API: not directly through curl, as most documentation will tell you, but through a custom sendRequest.sh script, which makes your life a bit easier. This gives you the tools to monitor, inspect, and manage your OpenSearch deployment effectively in a Connections environment.
Where do you want to check your indexes?
This is the first question you need to ask yourself. Do you want to check the indexes just from within an OpenSearch pod? From the host OS of a control-plane node? Or from anywhere through a management interface? You can make this fancy, but for me the second option was enough. If I can log on to my first control-plane node and use a command there to query the indexes, that’s good enough for me. So that’s what I’ll describe here.
HCL has described here how to check the indexes (thanks to Urs Meli for pointing me the way!). That document talks mainly about Elasticsearch, but since OpenSearch is a fork of Elasticsearch 7.10, most Elasticsearch commands work fine on OpenSearch.
Steps to take
In that document, you can find a sendRequest.sh script that you can use. To use it, you need three certificates from the opensearch-secret in Kubernetes, and if you want to run it from a control-plane node’s host OS, you also need to forward the port from the pod to the host.
The sendRequest.sh script
You can find the script in the opensearch-cluster-client pod as /usr/share/opensearch/probe/sendRequest.sh, but before you go and get it there, hang on: I’ll share my version of the script later, which contains some extra scripting for when you run it from the control-plane node’s host OS.
Getting the 3 certificates
You need three certificates to make a proper connection:
- chain-ca.pem
- opensearch-admin.crt.pem
- opensearch-admin-nopass.key
You need to save these on your host. I put mine in /opt/sendRequest/certs. The commands to put them there are:
kubectl get secret opensearch-secret -n connections -o jsonpath="{.data['chain-ca.pem']}" | base64 -d > /opt/sendRequest/certs/ca.crt
kubectl get secret opensearch-secret -n connections -o jsonpath="{.data['opensearch-admin.crt.pem']}" | base64 -d > /opt/sendRequest/certs/client.crt
kubectl get secret opensearch-secret -n connections -o jsonpath="{.data['opensearch-admin-nopass.key']}" | base64 -d > /opt/sendRequest/certs/client.key
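A quick sanity check that all three files landed and are non-empty can save you a confusing TLS error later. A small sketch; adjust cert_dir if you used a different location:

```shell
# Verify the three extracted certificate files exist and are non-empty.
cert_dir=/opt/sendRequest/certs   # same location as in the kubectl commands above
result=$(for f in ca.crt client.crt client.key; do
    if [ -s "$cert_dir/$f" ]; then
        echo "OK: $f"
    else
        echo "MISSING or EMPTY: $f"
    fi
done)
echo "$result"
```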
Forward the port
As I have the scripts and certificates on one node only, the cleanest option to access the opensearch-cluster-client service is by forwarding the port on that host only. The command to forward port 9200 of the OpenSearch service to local port 19200 is:
kubectl port-forward svc/opensearch-cluster-client -n connections 19200:9200
This is not persistent, so I added some code to the sendRequest script that checks whether the port is forwarded and forwards it if it isn’t.
Put everything in place
This is my sendRequest.sh script (which I put in /opt/sendRequest):
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset

# === Configuration ===
cert_dir=/opt/sendRequest/certs
forward_port=19200
namespace=connections
service=opensearch-cluster-client
target_port=9200

# === Check if the port is forwarded already ===
if ! nc -z localhost "$forward_port" 2>/dev/null; then
    echo "[INFO] Start port-forward: $forward_port -> $service:$target_port"
    kubectl port-forward "svc/$service" -n "$namespace" "$forward_port:$target_port" > /dev/null 2>&1 &

    # Wait up to 10 seconds for the port-forward to become active
    for i in {1..10}; do
        sleep 1
        if nc -z localhost "$forward_port"; then
            echo "[INFO] Port-forward active on localhost:$forward_port"
            break
        fi
        if [[ $i -eq 10 ]]; then
            echo "[ERROR] Port-forward didn't become active within 10 seconds." >&2
            exit 1
        fi
    done
else
    echo "[INFO] Port-forward to localhost:$forward_port is already active"
fi

# === Prepare request ===
_method=$1
shift 1
URL_base="https://localhost:$forward_port"

# Disable globbing so characters like * in the request path are not expanded
set -f

response_text=$(curl -s \
    --cert "$cert_dir/client.crt" \
    --key "$cert_dir/client.key" \
    --cacert "$cert_dir/ca.crt" \
    -X"${_method}" \
    ${URL_base}"$@")

# === Show response ===
echo "${response_text}"
set +f
The script uses the nc command, so you might need to install it if it’s not there already:
dnf install -y nc
I created a symbolic link to be able to use it from anywhere:
ln -s /opt/sendRequest/sendRequest.sh /usr/local/bin/sendRequest
That’s it.
What can you do now?
Here are some examples of what you can do now.
Index examples
# Check the opensearch indexes
sendRequest GET /_cat/indices?v
# Result:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open icmetrics_a_2025_1h h99RZIJCQnOb0avA0wg3Bg 5 1 332881693 0 113.2gb 56.6gb
green open security-auditlog-2025.03.12 Zh3X5eskQdmuPH8GzxHyWw 1 1 38872 0 19.2mb 9.6mb
yellow open quickresults K4A-RluOT7SoJcnJ7V-sKA 8 2 5657222 872096 10gb 3.7gb
Notice the yellow status for quickresults, which sometimes happens when your cluster isn’t replicating properly. Let’s quickly look at what these values mean:
| Field | Description |
|---|---|
| yellow | Cluster health status for the index. Yellow means all primary shards are available, but not all replica shards are allocated. |
| open | Index status: open means the index is active (closed means it’s disabled). |
| quickresults | The name of the index. |
| K4A-RluOT7SoJcnJ7V-sKA | The unique UUID of the index within the cluster. |
| 8 | Number of primary shards. |
| 2 | Number of replica shards per primary (so the expected total is 8 × 2 = 16 replica shards). |
| 5657222 | Number of documents in the index. |
| 872096 | Number of deleted documents (not yet purged). |
| 10gb | store.size: total size of all allocated shards, primaries plus replicas. |
| 3.7gb | pri.store.size: total size of the primary shards only. |
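The shard columns determine how many shard copies the cluster should eventually allocate. A quick sketch of the arithmetic for quickresults:

```shell
# Expected shard count for quickresults: 8 primary shards, 2 replicas each.
pri=8
rep=2
total=$(( pri * (1 + rep) ))   # each primary plus its replica copies
echo "Expected shards when fully allocated: $total"
```

With a yellow status, fewer than 24 shards are actually allocated; `sendRequest GET /_cat/shards?v` shows which copies are UNASSIGNED.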
Note: the quickresults index is provisioned with 8 primary shards by default. If your deployment is large, consider adjusting this via HCL’s admin configuration before the index is created for optimal performance. See here. You can monitor the shard count through:
sendRequest GET /_cat/indices/quickresults?h=index,pri,rep,docs.count,store.size,status
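If you save the listing regularly, a small filter can surface anything that is not green. This is an offline sketch; the sample lines stand in for real /_cat/indices output captured from sendRequest:

```shell
# Flag indexes that are not green, using sample lines in the same
# column layout as the /_cat/indices output shown above.
listing='green open icmetrics_a_2025_1h h99RZIJCQnOb0avA0wg3Bg 5 1 332881693 0 113.2gb 56.6gb
yellow open quickresults K4A-RluOT7SoJcnJ7V-sKA 8 2 5657222 872096 10gb 3.7gb'
not_green=$(echo "$listing" | awk '$1 != "green" { print $1, $3 }')
echo "$not_green"
```

Against a live cluster you would pipe the real output in instead: `sendRequest GET /_cat/indices | awk '$1 != "green" { print $1, $3 }'`.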
Other index overview & health commands
sendRequest GET /_cluster/health?pretty
sendRequest GET /_cat/shards?v
sendRequest GET /_cat/allocation?v
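The /_cluster/health response is JSON with many fields; if you only need the overall status in a monitoring script, a simple extraction works. A sketch using a trimmed sample response in place of a live call (the real response contains more fields):

```shell
# Pull just the status field out of a /_cluster/health response.
# The sample JSON stands in for: sendRequest GET /_cluster/health
response='{"cluster_name":"opensearch-cluster","status":"yellow","number_of_nodes":3}'
status=$(echo "$response" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$status"
```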
Index Mapping & Settings
sendRequest GET /my-index/_mapping?pretty # for example: GET /quickresults/_mapping?pretty
sendRequest GET /my-index/_settings?pretty
Document Counts & Storage
sendRequest GET /my-index/_stats/docs,store?pretty
Search examples
- URI search:
GET /my-index/_search?q=field:value
- Body search:
GET /my-index/_search
{ "query": { "match": { "field": "value" } }, "size": 10 }
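Note that the sendRequest.sh above only appends its arguments to the URL and has no way to send a request body, so body searches go through curl directly against the forwarded port. A minimal sketch, assuming the certificate paths and local port from earlier in this article:

```shell
# Body search without sendRequest.sh: build the query body first so you
# can inspect it, then send it with curl (certificate paths assume the
# earlier setup in /opt/sendRequest/certs).
body='{ "query": { "match": { "field": "value" } }, "size": 10 }'
echo "$body"

# The actual call (requires the port-forward from earlier to be active):
# curl -s --cert /opt/sendRequest/certs/client.crt \
#         --key /opt/sendRequest/certs/client.key \
#         --cacert /opt/sendRequest/certs/ca.crt \
#         -H 'Content-Type: application/json' \
#         -XGET "https://localhost:19200/my-index/_search" -d "$body"
```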
Snapshot / backup APIs
These are fully supported in OpenSearch (forked from Elasticsearch 7.10):
PUT /_snapshot/my_repo { "type":"fs", "settings": { "location":"/mnt/backups" } }
PUT /_snapshot/my_repo/snap1?wait_for_completion=true
GET /_snapshot/my_repo/_all
POST /_snapshot/my_repo/snap1/_restore
Note: ensure path.repo in opensearch.yml includes your backup location, and that security settings and file permissions allow snapshots.
Real-world examples
Some examples from daily practice:
- Discover stale indexes (oldest first; quote the URL so the shell doesn’t interpret the &):
sendRequest GET "/_cat/indices?v&s=creation.date:asc"
- Quick storage check:
sendRequest GET /_cat/indices?h=index,pri,rep,docs.count,store.size
- Search across indexes using _index:
GET /_search
{ "query": { "terms": { "_index": ["logs-2025.06.30","logs-2025.07.01"] } } }