1. Introduction
1.1 Purpose of Document
This document serves as a comprehensive reference for the on-premise deployment of HCL BigFix Service Management in a Kubernetes-based environment. It provides architecture considerations, high-level and low-level designs, assumptions, and security considerations. The application is owned, developed, and maintained by HCL Software, while the underlying infrastructure—including, but not limited to, compute resources, networking, storage, security policies, and backup—is fully managed and maintained by the client.
The goal of this guide is to describe the architecture of the deployment, ensuring all stakeholders are aligned on their responsibilities and expectations throughout the lifecycle of the application.
1.2 Scope of Document
This document covers:
- Architecture and infrastructure prerequisites for deploying the application on client-managed IT Infrastructure.
- High-level design (HLD) of the application, including system components, data flows, and integration points.
- Low-level design (LLD) with specific deployment artifacts, such as Kubernetes namespaces.
- Roles and responsibilities of the different stakeholders involved.
1.3 Out of Scope
This document is not intended as a guide on how to use the application or its various components. Additionally, it does not provide instructions on how to deploy or upgrade the application.
1.4 Intended Audience
This document is intended for the following stakeholders:
- Client IT Infrastructure Teams: For understanding the infrastructure requirements and supporting the setup of the environment.
- Solution Architects: For reviewing architecture compliance and security requirements.
- Application Support Teams: For post-deployment operations and maintenance.
A foundational understanding of Kubernetes, containerized applications, and Linux-based systems is assumed for readers of this document.
2 Glossary of Terms Used
Term | Definition |
---|---|
Kubernetes (K8s) | An open-source system for automating deployment, scaling, and management of containerized applications. |
Pod | The smallest deployable unit in Kubernetes that can contain one or more containers. |
Service | A Kubernetes resource that exposes a logical set of Pods and a policy to access them. |
StatefulSet | A Kubernetes controller used for managing stateful applications. |
Helm | A package manager for Kubernetes that allows defining, installing, and upgrading applications using charts. |
PVC (Persistent Volume Claim) | A request for storage resources by a user in Kubernetes. |
Ingress | A Kubernetes API object that manages external access to services, typically HTTP/S traffic. |
ConfigMap | A Kubernetes resource used to store non-confidential configuration data such as application settings. |
Secret | A Kubernetes resource used to store confidential data like passwords, tokens, and SSH keys securely. |
RBAC (Role-Based Access Control) | A security mechanism in Kubernetes to regulate access to resources based on the roles assigned to users or applications. |
3 Deployment Overview
HCL BigFix Service Management is designed as a containerized, microservices-based application that runs on Kubernetes. In the on-premise deployment model:
- The client provides and manages the infrastructure, including but not limited to compute resources, networking, storage, load balancers, DNS, and related system software (including patching and upgrades).
- HCL Software owns the application and is responsible for delivering application containers, Kubernetes manifests (YAML files or Helm charts), configuration artifacts, and the initial deployment. Support for the application may be provided by HCL Software partners or persons authorized by HCL Software, or may be owned by the client, depending on the contractual agreement.
This separation allows the client to maintain control over their IT landscape while benefiting from a consistent, vendor-supported application model.
3.1 Ownership and Responsibilities Matrix
The following matrix outlines the division of responsibilities between the client and HCL Software. This matrix outlines the standard division of responsibilities based on current understanding and agreed deployment practices. Any component, task, or activity not explicitly listed shall be considered as out of scope for HCL Software. In case of ambiguity or evolving requirements, ownership can be jointly reviewed and agreed upon through mutual discussion between the client and HCL Software stakeholders.
Component / Task | Client Responsibility | HCL Software Responsibility |
---|---|---|
IT Infrastructure Provisioning | | |
Kubernetes Cluster setup | | |
Node/VM Provisioning & OS Patching | | |
Networking & Firewall Rules | | |
Load Balancer Configuration | | |
DNS Configuration | | |
Persistent Storage Setup | | |
TLS/SSL Certificates | | |
Mailbox / SMTP Provisioning | | |
Encryption of Data | | |
Security and Monitoring | | |
Data Backup and Restore | | |
Replication of data between locations | | |
DR Invocation and Restoration | | |
Security Patching and Upgrades (All components) | | |
Application Container Images | | |
Kubernetes Manifests / Helm Charts | | |
Application Configuration | | |
Application Upgrades | | |
Configuration Templates | | |
CI/CD Integration (if needed) | | |
Deployment Execution | | |
Monitoring & Logging Integrations | | |
Post-Deployment Support | | |
“Shared” indicates that collaboration is required between both parties to ensure proper setup and operational continuity.
3.2 Deployment Environment Requirements
For a successful deployment of the application on an on-premise Kubernetes environment, specific responsibilities and prerequisites must be fulfilled by both parties.
3.2.1 Infrastructure Provisioning & Maintenance (Client Responsibility)
The client is responsible for provisioning and maintaining the underlying infrastructure as per the hardware and resource specifications shared during the RFP phase. This includes:
- Provisioning of virtual machines (VMs) with the required CPU, memory, disk, and OS configuration.
- VMs provisioned with a supported operating system (Ubuntu 24.04 LTS or RHEL 8/9 LTS (x86_64)).
- Ensuring high availability, redundancy, and sufficient capacity for scaling.
- Network infrastructure (switches, firewalls, load balancers) setup and routing.
- Storage infrastructure (block/file-based) accessible by the VMs.
- A jump server (bastion host) with UI access for the implementation team to access the infrastructure securely.
- Remote access to the IT infrastructure for the deployment team.
- Internet access, or appropriate egress policies, to reach required external services/portals during installation and setup.
3.2.2 Application Provisioning (HCL Software Responsibility)
HCL Software or its partner will be responsible for:
- Installing and configuring the Kubernetes cluster on the client-provided VMs.
- Deploying supporting components like ingress controllers, certificate managers, and metrics servers.
- Preparing the environment for application deployment (namespaces, RBAC, etc.).
- Deploying and configuring the application.
3.2.3 Readiness Checklist
The checklist below captures the bare minimum requirements and is meant to serve as a baseline reference. The client may additionally refer to their own infrastructure onboarding or environment readiness checklist to ensure full alignment with their operational standards.
Checklist Item | Description |
---|---|
VMs Provisioned | All required virtual machines are provisioned in accordance with the shared specifications. These VMs will host the Kubernetes control plane and worker nodes. |
Operating System Installed and Patched | A supported Linux distribution is installed on all VMs, with the latest security patches and kernel updates applied. |
Internal Connectivity Established | All provisioned VMs must allow internal communication with each other over the required ports (e.g., 6443 for the Kubernetes API, 2379 for etcd, 10250 for the kubelet). This includes control plane nodes, worker nodes, microservices, and other components. |
External Network Access | Internet access or an internal proxy is available for the VMs, allowing access to external container registries for pulling application and Kubernetes system images. |
Storage Connectivity Configured | Shared or dedicated storage (mounted or unmounted) is provisioned and accessible on the required nodes. This storage will be used for persistent volume claims by the application. |
DNS Resolution Verified | Proper DNS configuration is in place. All cluster nodes must be able to resolve internal and external hostnames. Reverse DNS lookup should also be tested for compatibility with cluster components. |
Time Synchronization Enabled | Network Time Protocol (NTP) or an equivalent mechanism is configured across all VMs to ensure consistent system time, which is critical for distributed system coordination. |
Firewall and Security Rules Applied | Necessary firewall ports are opened and security group rules applied to allow inter-node communication and cluster operations. |
Load Balancer Provisioned | If the application or control plane requires a load balancer, appropriate IP addresses or FQDNs must be allocated and routing configured. |
SSL/TLS Certificates Provisioned | Where the application requires HTTPS endpoints or mutual TLS authentication, certificates should be provisioned in advance by the client’s IT/security team. |
SMTP/Mailbox Configuration Shared | Details of the client’s internal email relay server or SMTP configuration should be provided for the application to send email notifications. |
Monitoring Integration Ready (Optional) | If application monitoring integration is planned, endpoint access and credentials for tools should be prepared and shared with the deployment team. |
3.3 Deployment Scenarios
The application supports the following deployment environments:
- Staging: For testing and internal validation.
- Production: Full-scale deployment with HA (if applicable) to support the agreed-upon volume of requests.
Each environment must be provisioned with separate namespaces and configuration profiles to ensure isolation and reliability.
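For illustration only, each environment can be represented by a dedicated Namespace object; the names and labels below are hypothetical and will be defined per client during deployment.

```yaml
# Illustrative only: namespace names and labels are client-specific.
apiVersion: v1
kind: Namespace
metadata:
  name: bfsm-staging           # hypothetical staging namespace
  labels:
    environment: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: bfsm-production        # hypothetical production namespace
  labels:
    environment: production
```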
3.4 Security and Patching Responsibility
The client is solely responsible for applying operating system (OS) patches, Kubernetes cluster and component upgrades, and remediating security vulnerabilities across the infrastructure stack (including, but not limited to, Kubernetes, OS, network, storage, and supporting middleware). As a software provider, HCL Software is responsible only for the maintenance of the application which will be provided as version updates related to the application itself. The secure usage, configuration, and operational governance of the application within the client’s environment—including identity management, role-based access control, and network-level controls—remain the client’s responsibility.
4 High Level Design
4.1 System Components Overview
HCL BigFix Service Management is a cloud-native, containerized solution composed of multiple microservices that communicate via REST APIs and message queues. It is designed for scalability, fault tolerance, and seamless integration within enterprise ecosystems. The application architecture is layered as follows:
- Frontend/UI Layer: A web-based interface exposed via Ingress and served over HTTPS.
- API Layer: Stateless backend services that expose RESTful APIs. These services handle user requests, business logic, and service orchestration.
- Service Layer: Microservices performing domain-specific processing, communicating via internal service mesh or HTTP/gRPC.
- Data Layer: Persistent data stores, such as relational databases (PostgreSQL) and NoSQL systems (Redis, MongoDB), accessed through internal Kubernetes services.
- Infrastructure/Platform Layer: Supporting components such as message brokers (e.g., Kafka), caching services, observability tools, and security services.
4.2 Logical Diagram
The diagram below illustrates the high-level logical architecture of HCL BigFix Service Management deployed on a Kubernetes cluster, highlighting the key components involved in user interaction, application deployment, and cluster orchestration.
End User Interaction
- The End User interacts with the application through a user interface or API endpoint.
- All traffic from the end user is routed via a Load Balancer, which acts as the entry point to the Kubernetes environment.
Load Balancer
- The Load Balancer distributes incoming traffic to the appropriate backend services within the Kubernetes cluster.
- It ensures high availability and fault tolerance by managing requests across multiple pods or services.
- It also abstracts the underlying services, presenting a unified access point to the user.
Kubernetes Cluster
The cluster is divided into two primary layers:
a. Kubernetes Master Plane
This layer contains the control components responsible for managing the state and orchestration of the cluster:
- API Server: The front-end for the Kubernetes control plane; receives and validates requests.
- Controller: Watches the state of the cluster and attempts to maintain the desired state.
- Scheduler: Assigns workloads (pods) to available nodes based on resource availability and constraints.
b. Worker Node Layer
Labelled here as “Workernode”, this is where the actual application workloads run. The number of worker nodes depends on the volume of requests the deployment must support. This layer is further divided into:
- Application Pods: These host the core business logic and microservices of the application.
- Database Pods: These provide persistent data storage services required by the application (e.g., PostgreSQL, MongoDB).
- Integration Engine Pods: These handle integration with external systems, third-party services, or legacy applications via RESTful APIs
4.3 Deployment Topology
The application follows a modular deployment topology:
- Namespaces: Separate namespaces for environments and functional components (e.g., application, integration engine, Kafka).
- Horizontal Scaling: Stateless services support auto-scaling based on CPU/memory metrics (see the sketch after this list).
- Affinity Rules: Microservices are grouped using node affinity/anti-affinity rules for optimized placement.
- Storage: Stateful services (DBs, brokers) use Kubernetes PersistentVolumeClaims backed by client-provided storage.
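As a minimal sketch of the horizontal scaling mentioned above (names, namespace, replica counts, and thresholds are hypothetical, not product defaults), a HorizontalPodAutoscaler for a stateless service could look like:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-service-hpa            # hypothetical name
  namespace: bfsm-production       # hypothetical namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-service              # hypothetical stateless Deployment
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```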
4.4 External Integrations
The application supports integrations with various enterprise systems via configurable endpoints and APIs. The Integration Engine, which acts as an integration hub, can enable integrations with:
- Email Server (SMTP): For system alerts, reports, and notifications.
- LDAP/AD: For user authentication and SSO.
- Monitoring Tools: Integration with tools like Moogsoft, iAutomate.
- Multiple ITSM Platforms: Integrations with third-party ITSM tools.
5 Low Level Design
5.1 Kubernetes Resource Specifications
The application is deployed using the following core Kubernetes resources:
Resource Type | Purpose |
---|---|
`Deployment` | Manages stateless application pods and ensures replica consistency. |
`StatefulSet` | Used for stateful components such as databases, ensuring stable network IDs. |
`Service` | Exposes application components internally and externally (ClusterIP, NodePort, LoadBalancer). |
`Ingress` | Routes external HTTP/S traffic to services using domain-based rules. |
`ConfigMap` | Stores non-sensitive configuration values that are injected into containers. |
`Secret` | Stores sensitive information such as DB passwords, tokens, and TLS certs. |
`PersistentVolumeClaim` (PVC) | Requests storage resources from client-provided infrastructure. |
`Namespace` | Segregates application environments. |
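For orientation, a minimal Deployment and ClusterIP Service of the kind listed above are sketched below. The names, image reference, and ports are placeholders; the actual manifests are delivered through the product Helm charts.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service                 # placeholder stateless component
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
    spec:
      containers:
        - name: api-service
          image: registry.example.com/bfsm/api-service:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: ClusterIP                   # internal-only exposure
  selector:
    app: api-service
  ports:
    - port: 80
      targetPort: 8080
```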
5.2 Helm Chart and Templating
The deployment uses Helm to manage Kubernetes manifests. Helm charts package all deployment resources, enabling:
- Parameterized deployments using values.yaml
- Environment-specific configurations
- Easy rollback and versioning
- Integration with CI/CD pipelines
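The fragment below is a hedged illustration of the values.yaml approach; the actual chart structure and value names are product- and client-specific and are provided with the delivered Helm charts.

```yaml
# Hypothetical values.yaml excerpt; real keys depend on the delivered chart.
global:
  environment: staging
  imageRegistry: registry.example.com
ingress:
  enabled: true
  hostname: bfsm.staging.example.com   # placeholder FQDN
  tls: true
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi
```

Environment-specific overrides are then applied at install or upgrade time, for example `helm upgrade --install bfsm ./chart -f values-prod.yaml` (release and file names are illustrative).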
5.3 Namespace Strategy
Namespaces are used to isolate various components of deployment:
- Prod/Staging, Integration-Engine, Kafka, etc. – usage-specific namespaces
- ingress-nginx – Dedicated namespace for Ingress controllers
Role-Based Access Control (RBAC) is applied at the namespace level to enforce least privilege.
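A minimal sketch of namespace-scoped RBAC, assuming a hypothetical deployment group and an example usage-specific namespace, is shown below; actual roles and bindings are defined per client.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer                 # hypothetical role name
  namespace: integration-engine      # example usage-specific namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "configmaps", "deployments", "statefulsets"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer-binding         # hypothetical binding name
  namespace: integration-engine
subjects:
  - kind: Group
    name: bfsm-deployers             # hypothetical group of deployment operators
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
```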
5.4 Persistent Storage
- Databases and stateful workloads use PVCs.
- The storage backend (e.g., NFS, iSCSI, Ceph) is client-provided and configured prior to deployment.
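A representative PVC request is sketched below; the claim name, storage class, and size are illustrative and depend on the client-provided storage backend.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-postgresql            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: client-storage # placeholder; backed by the client's storage infrastructure
  resources:
    requests:
      storage: 50Gi                # illustrative size
```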
5.5 Configuration Management
- `ConfigMaps` and `Secrets` are used to externalize configuration.
- All secrets are stored encrypted and mounted as environment variables or volumes.
- Changes to configuration can be rolled out dynamically through re-deployments or reload endpoints.
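To illustrate the externalization pattern (resource names and keys below are hypothetical), a ConfigMap and Secret can be surfaced to containers as environment variables or mounted volumes:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-service-config          # hypothetical name
data:
  LOG_LEVEL: "info"                 # non-sensitive application setting
---
apiVersion: v1
kind: Secret
metadata:
  name: api-service-secrets         # hypothetical name
type: Opaque
stringData:
  DB_PASSWORD: "change-me"          # placeholder; real secrets are provisioned securely
---
# Consumption inside a pod spec (fragment):
#   envFrom:
#     - configMapRef:
#         name: api-service-config
#     - secretRef:
#         name: api-service-secrets
```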
5.6 Health Checks and Auto-recovery
- `livenessProbe` and `readinessProbe` are configured on all pods.
- Kubernetes ensures:
  - Self-healing through restarts
  - Graceful rollouts via rolling updates
  - Rescheduling on healthy nodes in case of failures
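A hedged example of how such probes are typically declared on a container follows; the endpoints, ports, and timings are illustrative rather than product defaults.

```yaml
# Fragment of a container spec showing typical probe configuration.
containers:
  - name: api-service
    image: registry.example.com/bfsm/api-service:1.0.0   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz              # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
      failureThreshold: 3           # restart after 3 consecutive failures
    readinessProbe:
      httpGet:
        path: /ready                # hypothetical readiness endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
```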
5.7 Network Policies and Security
- Internal communication between pods is controlled using Kubernetes NetworkPolicies.
- Secrets are managed through Kubernetes Secrets and optionally integrated with a Vault provider.
- TLS is enabled at ingress level for secure communication from external clients.
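As an example of the intended segmentation model (labels, namespace, and port are hypothetical), a NetworkPolicy could restrict database pods to traffic from application pods only:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-db             # hypothetical policy name
  namespace: bfsm-production        # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      tier: database                # applies to database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: application     # only application pods may connect
      ports:
        - protocol: TCP
          port: 5432                # e.g. PostgreSQL
```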
5.8 Key Components of Deployment
The information below covers the core components of an HCL BigFix Service Management deployment and is indicative in nature. While the architecture remains the same, the actual names of deployments/pods/replica sets, etc. will vary for each client's deployment. The details of the application and database pods are not described in this document but follow the same structure. A detailed diagram for each deployment may be provided to each client for their specific deployment upon request.
5.8.1 Kube-system
System Components Overview
K8s Instance | Application / Component | Type | Sub-components / Pods |
---|---|---|---|
`rke2-snapshot-controller` | `rke2-snapshot-controller` | Deployment | ReplicaSet: `rke2-snapshot-controller-58dbfcd956` → Pod: `-47kmv` |
`rke2-metrics-server` | `rke2-metrics-server` | Deployment | ReplicaSet: `rke2-metrics-server-58ff89fc7` → Pod: `-bcp84` |
`rke2-ingress-nginx` | `rke2-ingress-nginx-controller` | DaemonSet | Multiple Pods: `-jgjhr`, `-n2qsc`, `-qpl59`, `-xw59x` |
`rke2-ingress-nginx` | `rke2-ingress-nginx-controller-admission` | Service | Handles webhook validation and admission traffic |
`rke2-coredns` | `rke2-coredns`, `rke2-coredns-autoscaler` | Deployment | ReplicaSets → Pods: `-ttlvh`, `-xkfv6`, `-4pzzp` |
Service and Network Connections
Source Component | Target Component | Port(s) | Protocol | Purpose / Notes |
---|---|---|---|---|
Ingress-NGINX Controller (Pods) | Webhook Service (`controller-admission`) | 443 | TCP | Admission validation for Ingress resources |
Ingress-NGINX Webhook Service | API Server (Implicit) | 443 | TCP | Communicates with Kubernetes API for webhook registration/validation |
Metrics Server (Pod) | API Server (Implicit) | 443 | TCP | Collects resource usage statistics from kubelets |
CoreDNS (Pod) | Internal Services (Other Pods, Cluster DNS queries) | 53 | UDP/TCP | Cluster DNS resolution |
Snapshot Controller | Persistent Volume API (Implicit) | N/A | - | Handles snapshot lifecycle for PVCs |
5.8.2 Integration Engine
System Components Overview
Component | Type | Name | Description |
---|---|---|---|
NiFi Stateful App | StatefulSet | `ie-nifi` | Manages NiFi pod lifecycle and ensures stable identity |
Headless Service | Service | `ie-nifi-headless` | Provides internal DNS resolution for NiFi cluster pods |
Metrics Service | Service | `ie-nifi-metrics` | Exposes internal metrics endpoint |
Main Access Service | Service | `ie-nifi` | Provides external access to NiFi instance |
Persistent Volume | PVC | `data-ie-nifi` | Mounted volume for NiFi data |
Application Pod | Pod | `ie-nifi-0` | The first and primary pod managed by the StatefulSet |
Service and Network Port Mapping
Source | Target | Port(s) | Protocol | Purpose |
---|---|---|---|---|
`ie-nifi-headless` | `ie-nifi-0` | 8443, 6007, 10000 | TCP | Internal service discovery and communication |
`ie-nifi-metrics` | `ie-nifi-0` | 9092 | TCP | Prometheus-compatible metrics exposure |
`ie-nifi` | `ie-nifi-0` | 8443, 7002, 9900 | TCP | 8443: communication with external applications; 7002, 9900: used only for internal communication within the application |
`ie-nifi` (STS) | `data-ie-nifi` | Volume Mount | N/A | Stores state, flow configuration, and runtime data |
5.8.3 K8s Storage Manager
System Components Overview
Namespace | Component Name | Type | Resource | Description |
---|---|---|---|---|
`openebs` | `openebs-ndm-operator` | Deployment | ReplicaSet → Pod: `cb7bf6676` → `wc4ll` | NDM operator for managing disk inventory |
`openebs` | `openebs-localpv-provisioner` | Deployment | ReplicaSet → Pod: `7c44946ddc` → `d46kp` | Local PV provisioner managing local volumes |
`openebs` | `openebs-ndm-cluster-exporter` | Deployment | ReplicaSet → Pod: `6d8759c9b` → `ctq6m` | Exposes NDM cluster metrics via Prometheus |
`openebs` | `openebs-ndm-cluster-exporter-service` | Service | Cluster-wide metrics endpoint | Exposes metrics at port 9100/TCP |
`openebs` | `openebs-ndm-node-exporter-service` | Service | Node-level metrics endpoint | Exposes node metrics at port 9101/TCP |
`openebs` | `openebs-ndm` | DaemonSet | Pods: `xlf5z`, `5sl4l`, `ghz8b`, `qr8d8` | Node Disk Manager for block device discovery |
`openebs` | `openebs-ndm-node-exporter` | DaemonSet | Pods: `lhjl6`, `nxf8d`, `z5nsr`, `h2c6z` | Per-node exporter for disk metrics |
Service and Port Mapping
Source | Target | Port(s) | Protocol | Purpose / Notes |
---|---|---|---|---|
`openebs-ndm-cluster-exporter-service` | `openebs-ndm-cluster-exporter` | 9100 | TCP | Prometheus scraping endpoint for cluster metrics |
`openebs-ndm-node-exporter-service` | `openebs-ndm-node-exporter` pods | 9101 | TCP | Prometheus endpoint exposing per-node disk metrics |
`openebs-ndm` DaemonSet | Local Nodes | N/A | N/A | Manages block device discovery and monitoring |
`openebs-localpv-provisioner` | Kubernetes Storage API | N/A | N/A | Provisions local persistent volumes |
5.8.4 Object Storage
System Components Overview
Component Name | Type | Resource | Description |
---|---|---|---|
`minio` | Deployment | Manages replica sets and pod creation | Deploys and maintains the object storage server |
`minio-d977b7cfb` | ReplicaSet | Generates and maintains `minio-d977b7cfb-d48q4` | ReplicaSet managing the active pod |
`minio-6f7c49b8c8` | ReplicaSet | Present in history, possibly from an earlier rollout | An older or parallel ReplicaSet |
`minio` | Service | Exposes ports 9000 and 9001 | Access layer for API and UI traffic |
`minio-d977b7cfb-d48q4` | Pod | Running MinIO application instance | Active container providing S3-compatible object store |
Service and Port Mapping
Source | Target | Port(s) | Protocol | Purpose / Notes |
---|---|---|---|---|
`minio` (Service) | `minio-d977b7cfb-d48q4` (Pod) | 9000 | TCP | MinIO API endpoint (S3-compatible interface) |
`minio` (Service) | `minio-d977b7cfb-d48q4` (Pod) | 9001 | TCP | MinIO Console (web-based UI for management) |
5.8.5 Cache
System Components Overview
Component Name | Type | Resource | Description |
---|---|---|---|
`redis-master` | StatefulSet | Manages Redis pod lifecycle | Ensures ordered, stable deployment of the Redis master instance |
`redis-master` | Service | Exposes Redis master to internal clients | Handles Redis traffic on port 6379 |
`redis-headless` | Headless Service | Internal DNS-based service | Enables direct pod-to-pod communication |
`redis-metrics` | Service | Exposes Redis metrics | Publishes Prometheus-compatible metrics on port 9121 |
`redis-master-0` | Pod | Redis master instance pod | Active Redis server pod |
`redis-data-redis-master` | PersistentVolumeClaim | Storage for Redis | Mounted by the Redis pod to persist data |
Service and Port Mapping
Source | Target | Port(s) | Protocol | Purpose / Notes |
---|---|---|---|---|
`redis-master` (Svc) | `redis-master-0` | 6379 | TCP | Redis database access from other services |
`redis-headless` (Svc) | `redis-master-0` | 6379 | TCP | Direct DNS-based access within the cluster |
`redis-metrics` (Svc) | `redis-master-0` | 9121 | TCP | Exposes metrics for Prometheus or monitoring tools |
`redis-master-0` (Pod) | `redis-data-redis-master` (PVC) | Volume Mount | N/A | Persistent storage for Redis database files |
5.8.6 Secret Vault
System Components Overview
Component Name | Type | Resource | Description |
---|---|---|---|
`openbao` | StatefulSet | Manages lifecycle of OpenBao pods | Ensures stable identity and storage for OpenBao nodes |
`openbao-0` | Pod | Active OpenBao instance | Serves secrets and configuration at standard ports |
`openbao` | Service | Internal access point to OpenBao pod | Exposes service over ports 8200 and 8201 |
`openbao-internal` | Service | Internal service communication | Cluster-internal communication for peer pods |
`data-openbao` | PVC | Persistent storage | Volume for storing OpenBao configuration and encrypted secrets |
`openbao-agent-injector-svc` | Service | Sidecar injector service | Injects secrets into workloads via mutating webhook |
Service and Port Mapping
Source | Target | Port(s) | Protocol | Purpose / Notes |
---|---|---|---|---|
`openbao` (Svc) | `openbao-0` (Pod) | 8200, 8201 | TCP | Internal microservice and API access |
`openbao-internal` (Svc) | `openbao-0` (Pod) | 8200, 8201 | TCP | Peer and internal cluster communication |
`openbao-0` (Pod) | `data-openbao` (PVC) | Volume Mount | N/A | Persistent storage for OpenBao data and configs |
`openbao-agent-injector-svc` | K8s API Server | Webhook | HTTPS | Injects secrets into annotated pods via admission controller hook |
6 Document Versioning & Updates
This is a living document and may be updated at any time to reflect changes in:
- Application architecture or features
- Deployment procedures or best practices
- Infrastructure or platform requirements
- Roles, responsibilities, or ownership matrices
Disclaimer:
This document may be revised without prior notice.
Stakeholders are encouraged to refer to the latest version maintained in the product knowledge base.