As integration solutions grow increasingly complex and microservice-driven, the need for high-performance, low-latency, and distributed caching becomes paramount. IBM App Connect Enterprise (ACE) provides an Embedded Global Cache (EGC) feature powered by WebSphere eXtreme Scale (WXS) to enable fast, in-memory data storage across integration nodes. While configuring EGC in traditional deployments is straightforward, enabling it within containerized environments—such as Docker or Kubernetes—requires deliberate setup and careful orchestration.

This article walks through the detailed steps of configuring Embedded Global Cache for ACE running in containers, with working code examples, YAML manifests, best practices, and troubleshooting tips. By the end, you will be equipped to deploy ACE containers with a fully operational embedded cache cluster ready for high-performance integration workloads.

What is Embedded Global Cache in ACE?

The Embedded Global Cache in App Connect Enterprise is a built-in caching solution that uses WebSphere eXtreme Scale (WXS) to provide:

  • In-memory key-value store

  • Cluster-wide data replication and availability

  • High-speed session and state management

The cache is available to all integration servers (runtimes) and can be used for temporary session data, token storage, lookup results, and more. EGC removes the need for external caching systems in many scenarios.

Architectural Overview in Containers

When running in containers, each ACE integration server instance must be configured to:

  1. Enable cache participation

  2. Discover and join a cluster

  3. Expose necessary ports (like 7800, 7801)

  4. Use shared configurations (e.g., via ConfigMaps in Kubernetes)

In a typical Kubernetes or OpenShift deployment, you’ll have:

  • Multiple ACE Integration Server Pods

  • A StatefulSet or Deployment

  • A Headless Service for peer discovery

  • Configurations mounted from ConfigMaps or passed as environment variables

Step-by-Step Configuration

Enable Global Cache in Integration Server Configuration

You can enable the cache using server.conf.yaml:

yaml
CacheManager:
  embeddedCache: true
  policy: embedded
  enableSSL: false
  discoveryAddress: my-cache-service.default.svc.cluster.local
  listenerPort: 7800
  clientPort: 7801

This file can be mounted into the container via a ConfigMap.

Create a ConfigMap in Kubernetes

yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ace-cache-config
data:
  server.conf.yaml: |
    CacheManager:
      embeddedCache: true
      policy: embedded
      enableSSL: false
      discoveryAddress: ace-cache-headless.default.svc.cluster.local
      listenerPort: 7800
      clientPort: 7801

Apply the manifest and verify the result, as shown below.
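
Both commands below are standard kubectl; the second prints the ConfigMap back so you can verify that the embedded server.conf.yaml survived the round trip:

bash
kubectl apply -f configmap.yaml
kubectl get configmap ace-cache-config -o yaml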

Define a Headless Service

The headless service allows DNS resolution of individual pod IPs for peer discovery.

yaml
apiVersion: v1
kind: Service
metadata:
  name: ace-cache-headless
spec:
  clusterIP: None
  selector:
    app: ace-server
  ports:
    - name: listener
      port: 7800
      targetPort: 7800
    - name: client
      port: 7801
      targetPort: 7801

This enables ACE containers to find and communicate with each other via cache cluster ports.
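
A quick way to confirm that peer-discovery DNS is working is to resolve the headless service from a throwaway pod; it should return one address per ready ACE pod (the busybox image here is just a convenient choice):

bash
kubectl run -it --rm dns-check --image=busybox:1.36 --restart=Never -- \
  nslookup ace-cache-headless.default.svc.cluster.local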

Create ACE Integration Server Deployment

Below is a sample Kubernetes StatefulSet for ACE:

yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ace-server
spec:
  serviceName: ace-cache-headless
  replicas: 3
  selector:
    matchLabels:
      app: ace-server
  template:
    metadata:
      labels:
        app: ace-server
    spec:
      containers:
        - name: ace
          image: icr.io/ace/ace-server:12.0.11.0
          ports:
            - containerPort: 7800
            - containerPort: 7801
            - containerPort: 7600
          env:
            - name: ACE_CONFIG_FILE
              value: /home/aceuser/server.conf.yaml
          volumeMounts:
            - name: config-volume
              mountPath: /home/aceuser/server.conf.yaml
              subPath: server.conf.yaml
      volumes:
        - name: config-volume
          configMap:
            name: ace-cache-config

The StatefulSet ensures stable pod identity and hostnames (ace-server-0, ace-server-1, etc.), which are crucial for reliable cache discovery.
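
For example, each replica is addressable at a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, which is what makes deterministic peer discovery possible:

bash
# List the stable pod identities created by the StatefulSet
kubectl get pods -l app=ace-server -o name
# Each pod resolves as, e.g., ace-server-0.ace-cache-headless.default.svc.cluster.local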

Validating the Global Cache Functionality

Once deployed, validate the EGC setup using the mqsicacheadmin tool from inside a container shell:

bash
kubectl exec -it ace-server-0 -- bash
mqsicacheadmin -n default -c -v

Look for output like:

yaml
Cache: default
Number of peers: 3
Cluster status: ACTIVE

You can also create a simple message flow that uses the global cache nodes to write and read entries, confirming that the runtime can interact with the cache.

Example Message Flow with Global Cache Nodes

In your ACE toolkit:

  1. Create a new message flow

  2. Add:

    • An HTTPInput node

    • A Compute node to build key-value pairs

    • A GlobalCachePut node

    • A GlobalCacheGet node

    • An HTTPReply node

Here’s a sample ESQL snippet that stages a key-value pair for the cache put:

esql
CREATE COMPUTE MODULE CacheFlow_Compute
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        DECLARE key CHARACTER 'user123';
        DECLARE value CHARACTER 'session-token-xyz';
        -- Stage the key/value pair for the downstream cache put
        SET Environment.Cache.Key = key;
        SET Environment.Cache.Value = value;
        PROPAGATE TO TERMINAL 1;
        RETURN FALSE; -- already propagated; avoid a second propagation
    END;
END MODULE;

You can use the GlobalCacheGet to retrieve and verify the same key.
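
As a rough sketch of the verification side (the module name is hypothetical, and this assumes the GlobalCacheGet node places the retrieved entry back into Environment.Cache.Value, mirroring the put flow above):

esql
CREATE COMPUTE MODULE CacheFlow_Verify
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Echo the retrieved entry in the HTTP reply for verification
        SET OutputRoot.JSON.Data.key = Environment.Cache.Key;
        SET OutputRoot.JSON.Data.value = Environment.Cache.Value;
        RETURN TRUE;
    END;
END MODULE;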

Testing Using Curl

After deploying the flow, send a test HTTP POST to the flow’s HTTPInput endpoint. Note that in this article’s setup ports 7800 and 7801 are reserved for the cache, so substitute the integration server’s HTTP listener port:

bash
curl -X POST http://<ace-server-ip>:<http-port>/cacheTest

Then check the logs for successful storage and retrieval.
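
For example, a quick look at the most recent log lines:

bash
kubectl logs ace-server-0 --tail=50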

Troubleshooting Tips

  • Peers not discovered: ensure the headless service is configured and resolvable via DNS

  • Cache cluster not forming: check that ports 7800 and 7801 are open and not blocked

  • Cache operations fail: verify that embeddedCache: true is set in server.conf.yaml

  • Pods crashing: check for a malformed server.conf.yaml via kubectl logs
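
The commands below cover quick diagnostics for these symptoms; all are standard kubectl, and the config path follows the StatefulSet above:

bash
# Are all pods registered behind the headless service?
kubectl get endpoints ace-cache-headless
# Any cache-related startup errors?
kubectl logs ace-server-0 | grep -i cache
# Is the mounted configuration what you expect?
kubectl exec -it ace-server-0 -- cat /home/aceuser/server.conf.yaml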

Monitoring Cache Metrics

You can expose JMX metrics or ACE metrics using Prometheus exporters. Monitor metrics like:

  • egc_active_peers

  • egc_heap_usage

  • egc_put_operations

  • egc_get_operations

These help evaluate performance and detect stale or unbalanced caches.
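
As a sketch, if your exporter or sidecar exposes a /metrics endpoint, the widely used Prometheus scrape annotations could be added to the StatefulSet’s pod template as follows (the port here is purely illustrative; use whatever your exporter listens on):

yaml
template:
  metadata:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9483"
      prometheus.io/path: "/metrics"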

Best Practices

  • Use StatefulSets for predictable pod naming and identity

  • Monitor cluster health periodically to detect split-brain or network partitions

  • Avoid single pod deployments in production for true distributed cache benefits

  • Secure cache communication in production using enableSSL: true (see the sketch after this list)

  • Limit TTL on cache entries for session-type data
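
For the SSL recommendation, a minimal sketch that uses only configuration keys already shown in this article:

yaml
CacheManager:
  embeddedCache: true
  policy: embedded
  enableSSL: true   # encrypt cache traffic between pods in production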

Using External Cache Grid

If you’re scaling out further, consider using an external WXS grid instead of embedded cache. ACE supports connecting to a remote grid by adjusting server.conf.yaml:

yaml
CacheManager:
  embeddedCache: false
  policy: external
  providerURL: my-wxs-grid:2809

This is ideal when cache size or resilience needs exceed what’s feasible inside pods.

Conclusion

In modern enterprise integration landscapes, the ability to manage state efficiently, reduce latency, and improve overall system responsiveness is not just a performance optimization—it’s a foundational requirement. IBM App Connect Enterprise (ACE), with its Embedded Global Cache (EGC) powered by WebSphere eXtreme Scale (WXS), offers a powerful built-in solution for in-memory, distributed caching that aligns perfectly with these needs.

When operating ACE in containerized environments—especially in Kubernetes or OpenShift clusters—correctly configuring and orchestrating the global cache can unlock massive benefits. These include reduced external dependencies, lightning-fast read/write operations, and state persistence across stateless microservice pods. However, the setup is non-trivial. It demands a strong grasp of Kubernetes concepts like StatefulSets, headless services, ConfigMaps, and inter-container communication.

Throughout this guide, we’ve walked through every essential step: enabling the cache in the server.conf.yaml, defining a cache-aware headless service, deploying ACE instances using Kubernetes manifests, validating the cache cluster with tooling like mqsicacheadmin, and even building a functional message flow using GlobalCachePut and GlobalCacheGet nodes. By following these steps, you ensure that your containerized ACE deployments not only function properly but scale effectively with high availability and performance.

Moreover, we’ve addressed common pitfalls and troubleshooting techniques—like missing ports, DNS resolution issues, and peer discovery failures—that can prevent your cache cluster from forming correctly. By proactively monitoring metrics, ensuring correct configuration, and embracing best practices like predictable pod naming and secured cache communication, your EGC deployment will be more robust, observable, and production-ready.

The benefits are tangible: global cache helps eliminate redundant service calls, store reusable data like session tokens and credentials, and maintain application context between microservices—all without the latency and complexity of relying on external systems like Redis or Memcached. And since EGC is embedded, you retain the advantages of tight integration, simplified management, and streamlined DevOps pipelines.

Looking ahead, organizations can consider even more advanced architectures, such as hybrid setups where embedded cache is used alongside external grid-based caching for specific workloads. The flexibility of ACE and its compatibility with both embedded and external cache configurations provides ample room to scale and adapt as business requirements evolve.

Configuring Embedded Global Cache for App Connect Enterprise in containers is a strategic investment in system performance and resilience. Done right, it empowers integration developers to build faster, leaner, and more intelligent flows, with less code and better state management, all while staying within a fully container-native, cloud-friendly ecosystem. With proper planning, monitoring, and automation, you can turn this embedded capability into a competitive differentiator in your integration platform.

If you’re serious about building high-throughput, microservice-driven integration solutions, then leveraging Embedded Global Cache in your ACE container deployments should not be optional—it should be central to your architecture strategy.