Customizing Controller-Managed Resources via ConfigMap
The KRM controller creates and manages Kubernetes resources such as StatefulSets (for the monitor component) and DaemonSets (for the node component) as part of the Flexera Kubernetes Inventory Agent deployment. By default, these resources are created with standard configurations suitable for most environments. However, advanced users may need to customize specific aspects of these resources to meet organizational requirements, security policies, or performance needs.
The krm-controller-config ConfigMap provides a mechanism to apply custom configurations to controller-managed workloads without modifying the KRM controller deployment itself. This approach allows you to:
- Set resource limits and requests to control CPU, memory, and ephemeral storage consumption
- Configure liveness and readiness probes for better health monitoring
- Apply security context settings to restrict container capabilities
- Add custom labels to pods for organizational tracking and policy enforcement
Configuration through the ConfigMap is optional and should only be used when the default behavior does not meet your specific requirements. The KRM controller applies these configurations automatically when the ConfigMap is present in the cluster.
To use this feature, the flexera-krm-advanced-config ClusterRole must be bound to the KRM controller's service account, granting it permission to read ConfigMaps. For details about permissions, see ClusterRoles and Permissions for Full Kubernetes Agent (KRM).
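A binding of that ClusterRole to the controller's service account might look like the following sketch. The binding name and the service account name are assumptions for illustration; use the names from your actual deployment:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flexera-krm-advanced-config-binding   # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flexera-krm-advanced-config           # ClusterRole referenced in this document
subjects:
  - kind: ServiceAccount
    name: krm-controller                      # assumed service account name
    namespace: <your-namespace>               # namespace where the controller runs
```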
ConfigMap Structure
The krm-controller-config ConfigMap follows standard Kubernetes ConfigMap conventions and contains a YAML-formatted configuration file defining container-specific customizations.
Basic structure:
apiVersion: v1
kind: ConfigMap
metadata:
  name: krm-controller-config
  namespace: <your-namespace>
data:
  config.yaml: |
    containers:
      - name: <container-name>
        labels:
          <label-key>: <label-value>
        resources:
          limits:
            cpu: <cpu-limit>
            memory: <memory-limit>
            ephemeral-storage: <storage-limit>
          requests:
            cpu: <cpu-request>
            memory: <memory-request>
            ephemeral-storage: <storage-request>
        livenessProbe:
          httpGet:
            path: <probe-path>
            port: <probe-port>
          initialDelaySeconds: <delay>
          periodSeconds: <period>
        readinessProbe:
          httpGet:
            path: <probe-path>
            port: <probe-port>
          initialDelaySeconds: <delay>
          periodSeconds: <period>
        securityContext:
          capabilities:
            drop:
              - <capability-name>
Key elements:
- metadata.name: Must be exactly krm-controller-config for the controller to recognize it
- metadata.namespace: Must be the same namespace where the KRM controller is deployed
- data.config.yaml: Contains the YAML configuration for container customizations
- containers: An array of container configurations, each identified by the name field
Container Name Selector
The name field identifies which container the configuration applies to. This allows you to apply different configurations to different components of the Flexera Kubernetes Inventory Agent.
Attribute | Value
Type | String
Valid Values | Container names used by the controller (for example, monitor)
Example | See below
Example:
containers:
- name: monitor
# Configuration specific to monitor component
The monitor container runs as part of the StatefulSet and collects cluster-wide resource information. You can specify multiple container configurations in the array to customize different components independently.
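For instance, independent entries for two components can sit side by side in the same array. The node container name below is an assumption for illustration; use the container names from your deployment:

```yaml
containers:
  - name: monitor       # customizes the StatefulSet's monitor container
    resources:
      limits:
        memory: "4Gi"
  - name: node          # assumed name of the DaemonSet's node container
    resources:
      limits:
        memory: "1Gi"
```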
Labels
Custom labels allow you to add metadata to pods created by the controller. This is useful for:
- Organizational tracking and cost allocation
- Applying network policies or pod security policies
- Integration with monitoring and observability tools
- Compliance and governance requirements
Attribute | Value
Type | Map of string key-value pairs
Example | See below
Example:
containers:
- name: monitor
labels:
app: krm-controller
role: monitor
environment: production
cost-center: "12345"
team: infrastructure
Labels specified in the ConfigMap are added to the pods in addition to any default labels applied by the controller. Kubernetes label naming conventions apply, so keys and values must follow the label syntax rules.
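As a reminder of those syntax rules, keys and values are limited to 63 characters of alphanumerics, hyphens, underscores, and dots, and keys may additionally carry a DNS-subdomain prefix. The label names below are illustrative:

```yaml
labels:
  app.kubernetes.io/component: monitor   # key with a DNS-subdomain prefix
  cost-center: "12345"                   # numeric values must be quoted as strings
  team_name: infra-core                  # '-', '_', and '.' are allowed in keys and values
```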
Resource Limits and Requests
Resource limits and requests control the compute resources allocated to containers. Setting appropriate values helps ensure stable cluster operations and prevents resource contention.
- Limits: The maximum amount of resources a container can consume. If exceeded, the container may be throttled (CPU) or terminated (memory).
- Requests: The minimum amount of resources guaranteed to the container. Used by the Kubernetes scheduler to place pods on appropriate nodes.
Attribute | Value
Type | Object containing limits and requests
Resource Types | cpu, memory, ephemeral-storage
Example | See below
Example:
containers:
- name: monitor
resources:
limits:
cpu: "3"
memory: "4Gi"
ephemeral-storage: "8Gi"
requests:
cpu: "1"
memory: "1Gi"
ephemeral-storage: "8Gi"
Resource value formats:
- CPU: Specified in cores. Use whole numbers ("2") or millicore values ("500m" for 0.5 cores)
- Memory: Specified using binary suffixes: Ki, Mi, Gi, Ti (e.g., "2Gi" for 2 gibibytes)
- Ephemeral storage: Same format as memory; controls temporary storage used by the container
Setting resource limits too low may cause containers to be throttled or terminated, impacting the agent's ability to collect inventory. Monitor resource usage in your environment to determine appropriate values. As a starting point, the example values above (1-3 CPU cores, 1-4 GiB memory) are suitable for most clusters.
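As an illustration of these value formats, the following fragment mixes fractional and whole-unit quantities. The numbers are illustrative, not recommendations:

```yaml
resources:
  requests:
    cpu: "500m"        # 500 millicores = 0.5 cores
    memory: "512Mi"    # 512 mebibytes
  limits:
    cpu: "2"           # 2 whole cores
    memory: "2Gi"      # 2 gibibytes
```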
Liveness Probe
Liveness probes determine whether a container is running properly. If a liveness probe fails, Kubernetes restarts the container automatically. This helps recover from situations where the application process is running but unable to make progress.
Attribute | Value
Type | Object defining probe configuration
Example | See below
Example:
containers:
- name: monitor
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 10
periodSeconds: 10
Configuration parameters:
- httpGet.path: The HTTP endpoint path to check (e.g., /healthz)
- httpGet.port: The TCP port number for the HTTP request
- initialDelaySeconds: Wait time before the first probe after container startup
- periodSeconds: Interval between probe attempts
The default liveness probe configuration (if any) is typically sufficient. Only override this if you have specific health check requirements or if you're experiencing issues with premature container restarts.
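For example, if the monitor container is being restarted before it finishes initializing on a large cluster, a longer initial delay and probe interval can help. The values below are illustrative:

```yaml
containers:
  - name: monitor
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
      initialDelaySeconds: 60   # give the process more startup time before the first check
      periodSeconds: 30         # probe less frequently thereafter
```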
Readiness Probe
Readiness probes determine whether a container is ready to accept traffic. Unlike liveness probes, failed readiness probes do not restart the container—they only remove it from service endpoints until it becomes ready again.
Attribute | Value
Type | Object defining probe configuration
Example | See below
Example:
containers:
- name: monitor
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 10
periodSeconds: 10
Configuration parameters:
- httpGet.path: The HTTP endpoint path to check (e.g., /readyz)
- httpGet.port: The TCP port number for the HTTP request
- initialDelaySeconds: Wait time before the first probe after container startup
- periodSeconds: Interval between probe attempts
Readiness probes are particularly useful during rolling updates to ensure new pods are fully operational before receiving traffic and old pods are terminated.
Security Context
The security context defines privilege and access control settings for containers. The capabilities section allows you to drop specific Linux capabilities to reduce the attack surface and comply with security policies.
Attribute | Value
Type | Object defining security settings
Example | See below
Example:
containers:
- name: monitor
securityContext:
capabilities:
drop:
- KILL
- MKNOD
- SYS_CHROOT
Capability management:
- drop: Array of Linux capability names to remove from the container's default capability set
Common capabilities to drop:
- KILL: Ability to send signals to processes outside the container
- MKNOD: Ability to create special files using mknod
- SYS_CHROOT: Ability to use chroot to change the root directory
- NET_RAW: Ability to use RAW and PACKET sockets
- SETUID, SETGID: Ability to manipulate process UIDs/GIDs
Dropping capabilities may impact the functionality of the Flexera Kubernetes Inventory Agent. Test thoroughly in a non-production environment before applying security context restrictions. Consult with your security team to determine which capabilities can be safely dropped in your environment.
For a complete list of Linux capabilities, see the capabilities man page.
Complete Configuration Example
The following example demonstrates a comprehensive ConfigMap with all available customization options:
apiVersion: v1
kind: ConfigMap
metadata:
  name: krm-controller-config
  namespace: flexera-cloudcontainerscan-nonprd
data:
  config.yaml: |
    containers:
      - name: monitor
        labels:
          app: krm-controller
          role: monitor
          environment: nonprd
          cost-center: "12345"
        resources:
          limits:
            cpu: "3"
            memory: "4Gi"
            ephemeral-storage: "8Gi"
          requests:
            cpu: "1"
            memory: "1Gi"
            ephemeral-storage: "8Gi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8081
          initialDelaySeconds: 10
          periodSeconds: 10
        securityContext:
          capabilities:
            drop:
              - KILL
              - MKNOD
              - SYS_CHROOT
Applying the ConfigMap
To apply the controller configuration:
1. Create or edit a YAML file containing the ConfigMap definition (e.g., krm-controller-config.yaml).
2. Apply the ConfigMap to your cluster using kubectl:
   kubectl apply -f krm-controller-config.yaml
3. The KRM controller automatically detects the ConfigMap and applies the configuration to newly created or updated resources.
Changes to the ConfigMap do not automatically update existing pods. The controller applies the configuration when it reconciles resources, which may occur:
- When you modify the KRM custom resource (kind: KRM)
- When the controller restarts
- During periodic reconciliation cycles
To force immediate application of ConfigMap changes, you can temporarily modify and revert a field in your KRM resource to trigger reconciliation.
Use kubectl describe to verify that the configuration has been applied to your pods:
kubectl describe statefulset <monitor-statefulset-name> -n <namespace>
kubectl describe daemonset <node-daemonset-name> -n <namespace>
Look for the labels, resource limits, probes, and security context in the pod template specification to confirm your changes are active.