Runtime Fabric – how to find a license’s capacity

Last Updated on 30/05/2021 by Patryk Bandurski

For the CloudHub deployment, we have an easy way to validate how many vCores we can assign to our applications. Runtime Fabric, on the other hand, is an on-premises solution. How do we validate capacity in this scenario? Recently I described how to set up the environment on AWS. We will use the same environment for today's case.

Runtime Manager subscription

On Anypoint Platform you can easily check how many vCores you have. You need to go to Access Management and then click Runtime Manager under the Subscription section.

Capacity for CloudHub on Anypoint Platform

In the screenshot above you can see how many vCores I have for the test environment called Sandbox. I have used just 0.1 vCore on my Sandbox, so I have 0.9 vCores left to use.

Currently, we are not able to check Runtime Fabric capacity in the Anypoint Platform portal; I will let you know when that changes. For now, we need to do this another way.

Runtime Fabric API

The MuleSoft team has prepared an API to manage Runtime Fabric. It is available on Anypoint Exchange.

Extract from the API console of Runtime Fabric API

Obtaining Access Token

First, we need to obtain an access token in order to authenticate the Runtime Fabric API calls. How do we do this?

We can retrieve the access token by logging in with our Anypoint Platform username and password. To do this, we send a POST request to https://anypoint.mulesoft.com/accounts/login with the username and password properties in the body, just like in the screenshot below.

Retrieving access token by providing user name and password to Anypoint Platform

In the response, I got the access_token property that will be used in later calls. Don't be surprised when the token expires 😉 its lifetime is limited.
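The same login call can be sketched in Python with just the standard library. The endpoint and the access_token response property come from the request shown above; the auth_header helper is my own convenience function that formats the token the way the later calls expect:

```python
import json
import urllib.request

LOGIN_URL = "https://anypoint.mulesoft.com/accounts/login"

def login(username: str, password: str) -> str:
    """POST credentials to Anypoint Platform and return the access_token."""
    body = json.dumps({"username": username, "password": password}).encode()
    req = urllib.request.Request(
        LOGIN_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

def auth_header(token: str) -> dict:
    """Build the Authorization header used by the Runtime Fabric calls below."""
    return {"Authorization": f"bearer {token}"}
```
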

List all the fabrics

Before we can check the capacity of our fabric, we need to get the fabric id. For this, we use the GET operation on the /organizations/{orgid}/fabrics resource. The code snippet below shows how to perform the call using curl.

curl -X GET \
  'https://anypoint.mulesoft.com/runtimefabric/api/organizations/7f52c5b0-4a2a-4d02-a5f9-de1d8732ffff/fabrics' \
  -H 'Authorization: bearer 1c105aea-8f8d-446b-8ac5-169be33047af'

Remember to attach the Authorization header with the access token obtained in the previous step. The response will hold an array of all fabrics, like the one below:

[
    {
        "id": "5d91fed9-2465-47bb-907c-06c25629416f"
    }
]
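Picking the id out of that response is easy to script. Here is a minimal Python sketch; list_fabrics is a hypothetical helper wrapping the same GET call as the curl above, and fabric_ids simply pulls out the id fields:

```python
import json
import urllib.request

BASE = "https://anypoint.mulesoft.com/runtimefabric/api"

def list_fabrics(org_id: str, token: str) -> list:
    """GET all fabrics in an organization, mirroring the curl call above."""
    url = f"{BASE}/organizations/{org_id}/fabrics"
    req = urllib.request.Request(url, headers={"Authorization": f"bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def fabric_ids(fabrics: list) -> list:
    """Extract just the id of each fabric from the response array."""
    return [fabric["id"] for fabric in fabrics]
```
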

Capacity of a specific Runtime Fabric

Now we are ready to ask for our specific Runtime Fabric. To do this, we append the fabric id to the path, just like this:

curl -X GET \
  'https://anypoint.mulesoft.com/runtimefabric/api/organizations/7f52c5b0-4a2a-4d02-a5f9-de1d8732ffff/fabrics/5d91fed9-2465-47bb-907c-06c25629416f' \
  -H 'Authorization: bearer 1c105aea-8f8d-446b-8ac5-169be33047af'

The response can be divided into the following parts:

  • Fabric details like name, region, and status.
  • A nodes array with details about each node, covering both controllers and workers:
    • Node details like name and role
    • capacity – the total resources (CPU, memory, pods) available on the node
    • allocatedRequestCapacity – the resources already requested by running pods
    • allocatedLimitCapacity – the upper resource limits declared by running pods

Here is a sample response for one of my environments:

 {
        "id": "5d91fed9-2465-47bb-907c-06c25629416f",
        "name": "test-fabric",
        "region": "us-east-1",
        "organizationId": "7f52c5b0-4a2a-4d02-a589-de1d8732153b",
        "version": "1.2.61",
        "status": "Disconnected",
        "availableUpgradeVersion": "1.3.1",
        "nodes": [
            {
                "uid": "815074d5-677d-11e9-b094-0243bc36c69c",
                "name": "172.31.0.10",
                "kubeletVersion": "v1.11.9",
                "dockerVersion": "docker://17.3.2",
                "role": "controller",
                "status": {
                    "isHealthy": true,
                    "isReady": true
                },
                "capacity": {
                    "cpu": 1,
                    "cpuMillis": 1700,
                    "memory": "7819Mi",
                    "memoryMi": 7819,
                    "pods": 110
                },
                "allocatedRequestCapacity": {
                    "cpu": 0,
                    "cpuMillis": 910,
                    "memory": "1090Mi",
                    "memoryMi": 1090,
                    "pods": 15
                },
                "allocatedLimitCapacity": {
                    "cpu": 3,
                    "cpuMillis": 3250,
                    "memory": "3270Mi",
                    "memoryMi": 3270,
                    "pods": 15
                }
            },
            {
                "uid": "841a504e-677e-11e9-b094-0243bc36c69c",
                "name": "172.31.0.26",
                "kubeletVersion": "v1.11.9",
                "dockerVersion": "docker://17.3.2",
                "role": "worker",
                "status": {
                    "isHealthy": true,
                    "isReady": true
                },
                "capacity": {
                    "cpu": 7,
                    "cpuMillis": 7700,
                    "memory": "61240Mi",
                    "memoryMi": 61240,
                    "pods": 110
                },
                "allocatedRequestCapacity": {
                    "cpu": 0,
                    "cpuMillis": 410,
                    "memory": "440Mi",
                    "memoryMi": 440,
                    "pods": 3
                },
                "allocatedLimitCapacity": {
                    "cpu": 0,
                    "cpuMillis": 600,
                    "memory": "870Mi",
                    "memoryMi": 870,
                    "pods": 3
                }
            },
            {
                "uid": "7d5f08f1-677e-11e9-b094-0243bc36c69c",
                "name": "172.31.0.11",
                "kubeletVersion": "v1.11.9",
                "dockerVersion": "docker://17.3.2",
                "role": "worker",
                "status": {
                    "isHealthy": true,
                    "isReady": true
                },
                "capacity": {
                    "cpu": 7,
                    "cpuMillis": 7700,
                    "memory": "61240Mi",
                    "memoryMi": 61240,
                    "pods": 110
                },
                "allocatedRequestCapacity": {
                    "cpu": 0,
                    "cpuMillis": 410,
                    "memory": "440Mi",
                    "memoryMi": 440,
                    "pods": 4
                },
                "allocatedLimitCapacity": {
                    "cpu": 0,
                    "cpuMillis": 600,
                    "memory": "870Mi",
                    "memoryMi": 870,
                    "pods": 4
                }
            }
        ],
        "secondsSinceHeartbeat": 410515,
        "clusterVersion": "1.0.6-acfa25a"
    }

My environment consists of one controller and two workers. The last worker has 7 CPUs available, and only 410 of its 7700 millicores are requested. In other words, effectively no application is running on that worker yet.
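Given a response like the one above, the remaining headroom per worker can be computed by subtracting allocatedRequestCapacity from capacity. A small Python sketch follows; the field names come from the sample response, while treating unrequested millicores as "free" is my own simplification:

```python
def free_cpu_millis(node: dict) -> int:
    """CPU millicores on a node that no pod has requested yet."""
    return node["capacity"]["cpuMillis"] - node["allocatedRequestCapacity"]["cpuMillis"]

def worker_headroom(fabric: dict) -> dict:
    """Map each worker node's name to its unrequested CPU, in millicores."""
    return {
        node["name"]: free_cpu_millis(node)
        for node in fabric["nodes"]
        if node["role"] == "worker"
    }
```

For the sample above, each worker reports 7700 - 410 = 7290 free millicores.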

Summary

Getting information about the current capacity of our Runtime Fabric environment is a straightforward task: we invoke a dedicated API. Three properties are of interest: capacity, allocatedRequestCapacity, and allocatedLimitCapacity. Based on these, we can assess how much CPU we have left for new applications. In the next article, I will write about my tests regarding capacity and configuration. Stay tuned!

