Category: vmware

vRA Data Center on Demand with Terraform

Don’t you just hate it when you get the email statement for your monthly AWS bill and realize you forgot to shut down your latest experiment? Just as I have automated the start/stop of the homelab, I decided it was time to extend that functionality to my AWS setup and vRA configuration. The project gave me an excuse to learn me some Terraform and give the vRA Terraform provider a try as well. This was my first time using Terraform, and I may have been a bit ambitious for my first use case, but I learned a lot.

The design goal is a fully automated AWS VPC spanning multiple availability zones, with subnets, security groups, an internet gateway, and route tables. I also wanted the vRA constructs automated, including the Cloud Account, Cloud Zones, Network Profiles, and Image and Flavor Mappings. I could have used AWS CloudFormation and the vRA API, but Terraform let me manage both stacks from a common configuration file without coding against the APIs myself.

First off is the AWS VPC design. Using Terraform makes this an easy iterative process. I am on my way to a three-tier web app leveraging application load balancers with public/private subnets, but for now only two tiers are defined and both are open to the internet for easy testing and validation. Forgive the poor security practices; I will be tightening up in future iterations of the environment.

AWS VPC Diagram

I spent extra time figuring out how to dynamically add/remove availability zones using a variable map to define the target zones. The AZ variable then drives the subnet creation with automated names, ranges, routes, and security group members. Lots of experimentation for a Terraform newbie to figure out loops, functions, and data references.

Variable: subnet_numbers mapping

## Availability zones for VPC
variable "subnet_numbers" {
  description = "Map availability zone to subnet numbers"
  default = {
    "us-east-1a" = 1
    "us-east-1b" = 2
#   "us-east-1c" = 3
  }
}

The resource to dynamically create the web subnets using the availability zone mapping.

##  Web subnets
resource "aws_subnet" "web_subnets" {

  # iterate through the availability zone to subnet number mapping
  for_each = var.subnet_numbers

  vpc_id = aws_vpc.webapp-vpc.id

  # automatically calculate the subnet from the VPC CIDR and mapping
  cidr_block = cidrsubnet(aws_vpc.webapp-vpc.cidr_block, 8, each.value)

  availability_zone = each.key

  tags = {
    # use substr on naming to extract the last
    # 2 chars of the AZ for the name spec (ie: web-1a)
    Name = join("-", ["web", substr(each.key, 8, 2)])
  }
}
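To see what cidrsubnet is doing, here is a minimal plain-JavaScript sketch of the /16-to-/24 case used above (the 10.0.0.0/16 VPC CIDR is an assumption; the post does not state the actual range):

```javascript
// Hypothetical model of Terraform's cidrsubnet(cidr, 8, netnum) for an
// IPv4 /16 VPC: 8 new bits yield /24 subnets, numbered via the third octet.
function cidrsubnet(prefix, newbits, netnum) {
  var parts = prefix.split("/");
  var newPrefixLen = parseInt(parts[1], 10) + newbits;
  var octets = parts[0].split(".").map(Number);
  // only models the /16 -> /24 case: netnum lands entirely in octet 3
  octets[2] += netnum;
  return octets.join(".") + "/" + newPrefixLen;
}

// us-east-1a maps to 1 and us-east-1b maps to 2 in subnet_numbers
console.log(cidrsubnet("10.0.0.0/16", 8, 1)); // 10.0.1.0/24
console.log(cidrsubnet("10.0.0.0/16", 8, 2)); // 10.0.2.0/24
```

Commenting "us-east-1c" back into the variable map would automatically carve out a third /24 the same way.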


Subnets in the AWS Console

AWS Subnets

I used similar methods for the route associations and security groups.

### Route tables and associations
resource "aws_route_table" "webapp-rt" {
  vpc_id = aws_vpc.webapp-vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.webapp-igw.id
  }
  tags = {
    Name = "webapp-rt"
  }
}

resource "aws_route_table_association" "web_routes" {
  # Iterate through the web subnets to add each to the route table
  for_each = aws_subnet.web_subnets
  subnet_id      = each.value.id
  route_table_id = aws_route_table.webapp-rt.id
}

Now that AWS is configured, it is time to start on the vRA infrastructure constructs. First is to get the terraform-provider-vra, available on GitHub, installed and configured. The VMware blog “Getting started with the vRealize Automation Terraform Provider” by Sam McGeown is a great resource where I started to figure this all out.

Setting up the cloud account was straightforward, but I also needed to configure my cloud zones dynamically. You have to use specific data blocks to retrieve the vRA construct identifiers needed to configure resources. The cloud zones require the vra_region to be retrieved from vRA. Then it is back to iterating the subnets/availability zone variable to create multiple cloud zones.

# Get the vra_region to create the Cloud Zones
data "vra_region" "lab" {
  cloud_account_id = vra_cloud_account_aws.lab.id
  region           = var.region
}

# Configure a new Cloud Zone per availability zone
resource "vra_zone" "aws" {
  for_each = var.subnet_numbers

  name = join(" ", ["AWS", each.key])
  # generate the cloud zone description from the AZ
  description = join(" ", ["Cloud Zone configured by Terraform", each.key])
  region_id   = data.vra_region.lab.id

  tags {
    key   = "zone"
    value = each.key
  }
}

Next up are the flavor and image mappings. I attempted to iterate through a variable mapping for the flavor name/type, but it would only create the first one and then fail, so I went back to simple hard coding as these are really pretty static.

resource "vra_flavor_profile" "lab" {
  name        = "terraform-flavor-profile"
  description = "Flavor profile created by Terraform"
  region_id   = data.vra_region.lab.id

  flavor_mapping {
    name          = "small"
    instance_type = "t3a.nano"
  }
  flavor_mapping {
    name          = "medium"
    instance_type = "t3a.micro"
  }
  flavor_mapping {
    name          = "large"
    instance_type = "t3a.small"
  }
}

# Create a new image profile
resource "vra_image_profile" "lab" {
  name        = "terraform-aws-image-profile"
  description = "AWS image profile created by Terraform"
  region_id   = data.vra_region.lab.id

  image_mapping {
    name       = "docker"
    image_name = var.ami
  }
}

Last are the network profiles, which are being configured but are a bit troublesome. I am having issues getting security groups configured properly on the profiles, and on every Terraform run the profiles get updated even when no change is expected. I have not dug through the issues on the provider to determine if it's a provider problem or just me.

To create the network profile, the provider needs a data block to query the networks discovered by vRA for the cloud account. The data block again iterates the subnets variable to dynamically build the list of networks to add to the profiles.

data "vra_fabric_network" "web" {
  # Iterate the subnets and extract the Name from tags to filter
  for_each = aws_subnet.web_subnets

  filter     = "name eq '${each.value.tags.Name}'"
  depends_on = [vra_cloud_account_aws.lab]
}

resource "vra_network_profile" "web" {
  name        = "aws-web"
  description = "AWS Web Tier Profile"
  region_id   = data.vra_region.lab.id

  # Iterate loop to get the list of fabric ids to add to the profile
  fabric_network_ids = [for i in data.vra_fabric_network.web : i.id]

  # This is not working. The ID is extracted and shows in the Terraform
  # update, but the security group is not associated in the vRA GUI.

  # Add constraint tags to the profile (illustrative key/value)
  tags {
    key   = "tier"
    value = "web"
  }
}

When I run “terraform apply” I get 21 configurations created, but initially hit a few errors on the vRA network profiles. This is a timing issue in vRA: the cloud account has been added, but the networks have not been discovered yet. Simply running the apply again completes the configuration, but adding a local-exec provisioner with a simple 10 second sleep on the cloud account got me past the issue entirely.

terraform apply

Plan: 21 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_vpc.webapp-vpc: Creating...
aws_vpc.webapp-vpc: Creation complete after 6s [id=vpc-04d6e6ca0ebd042e5]
aws_internet_gateway.webapp-igw: Creating...
aws_subnet.app_subnets["us-east-1a"]: Creating...
aws_subnet.app_subnets["us-east-1b"]: Creating...
aws_subnet.web_subnets["us-east-1b"]: Creating...
aws_subnet.web_subnets["us-east-1a"]: Creating...
aws_security_group.web-sg: Creating...
aws_subnet.web_subnets["us-east-1b"]: Creation complete after 2s [id=subnet-0b31ac3cb3815cbbc]
aws_subnet.app_subnets["us-east-1a"]: Creation complete after 2s [id=subnet-0b6354e438320e629]
aws_subnet.app_subnets["us-east-1b"]: Creation complete after 2s [id=subnet-0e7872f941509749b]
aws_security_group.db-sg: Creating...
aws_subnet.web_subnets["us-east-1a"]: Creation complete after 2s [id=subnet-02ff969e1f81f6dfa]
aws_internet_gateway.webapp-igw: Creation complete after 2s [id=igw-0d57829c0395c4191]
aws_route_table.webapp-rt: Creating...
aws_security_group.web-sg: Creation complete after 4s [id=sg-08c234ed2639b13bf]
aws_route_table.webapp-rt: Creation complete after 2s [id=rtb-053c6b50b33e41890]
aws_route_table_association.app_routes["us-east-1b"]: Creating...
aws_route_table_association.web_routes["us-east-1b"]: Creating...
aws_route_table_association.web_routes["us-east-1a"]: Creating...
aws_route_table_association.app_routes["us-east-1a"]: Creating...
aws_route_table_association.app_routes["us-east-1b"]: Creation complete after 1s [id=rtbassoc-0368ec5fd28dd93a0]
aws_route_table_association.web_routes["us-east-1b"]: Creation complete after 1s [id=rtbassoc-08ca998763a21231e]
aws_route_table_association.app_routes["us-east-1a"]: Creation complete after 1s [id=rtbassoc-0632efaaf242ba6fd]
aws_route_table_association.web_routes["us-east-1a"]: Creation complete after 1s [id=rtbassoc-06f346c929570754e]
vra_cloud_account_aws.lab: Creating...
aws_security_group.app-sg: Creation complete after 4s [id=sg-0e35897dacc752515]
aws_security_group.db-sg: Creation complete after 4s [id=sg-0b293e47f78676b97]
vra_cloud_account_aws.lab: Provisioning with 'local-exec'...
vra_cloud_account_aws.lab (local-exec): Executing: ["/bin/sh" "-c" "sleep 10"]
vra_cloud_account_aws.lab: Still creating... [10s elapsed]
vra_cloud_account_aws.lab: Creation complete after 15s [id=e91c1114-c4ae-4eef-a049-915009a7b3ed]
data.vra_region.lab: Refreshing state...
data.vra_fabric_network.web["us-east-1b"]: Refreshing state...
data.vra_fabric_network.web["us-east-1a"]: Refreshing state...
data.vra_fabric_network.app["us-east-1a"]: Refreshing state...
data.vra_fabric_network.app["us-east-1b"]: Refreshing state...
vra_zone.aws["us-east-1b"]: Creating...
vra_zone.aws["us-east-1a"]: Creating...
vra_image_profile.lab: Creating...
vra_flavor_profile.lab: Creating...
vra_network_profile.app: Creating...
vra_network_profile.web: Creating...
vra_flavor_profile.lab: Creation complete after 1s [id=deec4ef1-d486-488d-adff-f35d1fc749b2-8af76fe6-2161-4658-a1f0-2664d6b917c4]
vra_image_profile.lab: Creation complete after 1s [id=deec4ef1-d486-488d-adff-f35d1fc749b2-8af76fe6-2161-4658-a1f0-2664d6b917c4]
vra_zone.aws["us-east-1b"]: Creation complete after 1s [id=08490e94-e9a3-4efb-b166-d8ef483e2538]
vra_zone.aws["us-east-1a"]: Creation complete after 1s [id=342b16f9-2a58-487e-8494-6b5d6efc76ed]
vra_network_profile.app: Creation complete after 0s [id=bcde0922-564f-41bb-9c63-6dd711213ad3]
vra_network_profile.web: Creation complete after 0s [id=bcd1a144-77a2-4c1d-a70f-0a76db4d34ad]

The network profile still requires a manual tweak to fix the security groups, but the rest of the configuration is good to go. This is a much better process to manage vRA as code, and all configurations are tracked in git. I can change a couple of variables to deploy to a new AWS region, or use anywhere from one to all availability zones in that region, and have vRA configured to deploy my applications.

Going forward, Terraform is going to be the tool of choice for managing my vRA configurations. At the end of a hard day in the lab I can run “terraform destroy” to delete everything and avoid those pesky credit card bills.

The Terraform files used in my experiment are available on GitHub. Please let me know where I’ve done stupid things and how to make things better.

Network profile needing manual tweak to security groups. Gotta find a fix for this…

Broken SG

Configured Cloud Zones

Cloud Zones

Until next time…

Anytime you learn, you gain. -Bob Ross


A Virtual Machine by any other name would smell as sweet

If you ever want to start an interoffice Hunger Games struggle, suggest changing your corporate hostname standard. And if you do succeed in moving forward with a new naming standard, getting there by committee makes Brexit look easy. I started researching this topic expecting to show the limited capability of the vRA built-in custom naming and then dive directly into vRO event broker workflows to meet those pesky hostname requirements. I was surprised how far I could get using only native vRA constructs to meet a naming standard, saving the vRO implementation for another day.

Here is the naming standard I am using which lets me identify several meta data items from just the name:

<Environment><OS Type> – <Project Code> – <Server usage><### Sequence>

  • Environment – first initial of the environment (devl/test/staging/prod)
  • OS Type – first initial of windows/linux
  • Project Code – unique 3 character code added as a custom property on each project
  • Server usage – (a/d/w/k) for application, database, web, or container server
  • <### Sequence> – 3 digit number
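The standard can be captured in a single regular expression, handy for eyeballing whether a generated name complies (a sketch assuming 3-character alphanumeric project codes):

```javascript
// env (d/t/s/p) + os (w/l), then project code, then usage (a/d/w/k) + 3-digit sequence
var namePattern = /^[dtsp][wl]-[a-z0-9]{3}-[adwk][0-9]{3}$/;

console.log(namePattern.test("dl-cas-k001")); // true  - devl linux, CAS, container #1
console.log(namePattern.test("pw-lab-d001")); // true  - prod windows, LAB, database #1
console.log(namePattern.test("xx-foo-z01"));  // false - bad env/usage, short sequence
```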

For example – a development Photon server running containers for project Tango, which has a project code of CAS, gets the name dl-cas-k001:


A first look at the custom naming on projects shows it is fairly limited. The name can be configured to use various properties from the project, resource, and endpoint, plus a generated sequence (it says random, but it is an increasing number sequence).

Custom Naming
Custom Naming Project
Custom Naming Resource
Custom Naming Endpoint

The biggest constraint with the custom naming template is that user input during provisioning cannot be directly configured on the template, so it is not practical for the example naming standard, which includes environment, OS, and server types.

This is where I originally planned to abandon the built-in naming and go custom vRO, but then I started looking at the blueprint function capabilities to manipulate the resource name. I set the custom name template simply to:


On the blueprint I added inputs allowing the user to select the deployment choices that are also required for the naming standard.

Request Form

I then leveraged the Cloud Assembly blueprint functions to build the resource name. The property value combines inputs, project properties, and functions to meet the standard. Here is the expression set for the name property on the resource (the full blueprint is provided at the end of the post for reference):

      name: '${input.environment}${substring(input.image,0,1) == "w" ? "w" : "l"}-${to_lower(resource.Cloud_vSphere_Network_1.projectCode)}-${input.servertype}'

I did hit an interesting gotcha. The project custom property (projectCode) is automatically injected into all resources, but there appears to be an ordering issue when attempting to use “self.projectCode”. I had to reference it from the network resource, which is always configured before the VM resource.
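The name expression can be exercised outside vRA with a small plain-JavaScript stand-in for the blueprint functions (the 3-digit sequence handling is an assumption, since vRA appends its own generated sequence):

```javascript
// substring/to_lower stand-ins for the blueprint expression, plus a
// hypothetical zero-padded sequence that vRA would normally generate
function buildName(environment, image, projectCode, servertype, seq) {
  var os = image.substring(0, 1) === "w" ? "w" : "l";
  var sequence = ("00" + seq).slice(-3);
  return environment + os + "-" + projectCode.toLowerCase() + "-" + servertype + sequence;
}

console.log(buildName("d", "photon", "CAS", "k", 1));       // dl-cas-k001
console.log(buildName("p", "windows 2016", "LAB", "d", 1)); // pw-lab-d001
```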

As you can see from the deployments the servers are getting named according to standard based on the requests.

Project Lab:  Production, windows, database server

Project Pacific:  3 Staging, centos, container servers

Project Tango: 2 Development, centos, application servers


The VMs are also registered in DNS with the generated name:


At this point I still do not have any validation against DNS, AD, or CMDB to check that the names are unique and the objects do not already exist. I am hoping this validation becomes part of the custom naming ability in a future vRA release; today I would have to fall back to vRO/ABX.

Would I use this for a large environment? No. I need the validation. vRO could be used to integrate with a hostname service, or even provide a hostname service via XaaS for manual builds. This method would also require modifying all blueprints if the standard changed, and it is error prone depending on the blueprinter. But it proved an interesting experiment in the lab and forced me to learn the functions and expressions available on the blueprint.

Blueprint with inputs and functions:

formatVersion: 1
# input and resource names below are reconstructed; resource names follow
# the Cloud Assembly defaults (Cloud_vSphere_Machine_1, Cloud_vSphere_Network_1)
inputs:
  image:
    type: string
    title: Operating System
    description: The operating system version to use.
    enum:
      - centos
      - photon
      - windows 2016
      - windows 2019
    default: centos
  size:
    type: string
    title: Size
    description: How big do you need it.
    enum:
      - small
      - medium
      - large
    default: small
  environment:
    type: string
    title: Environment
    description: Target Application Environment
    oneOf:
      - title: Development
        const: d
      - title: Test
        const: t
      - title: Staging
        const: s
      - title: Production
        const: p
    default: d
  count:
    type: integer
    title: Count
    description: Number of VMs
    maximum: 8
    minimum: 1
    default: 1
  servertype:
    type: string
    title: Server Usage
    description: Server usage for VM
    oneOf:
      - title: Webserver
        const: w
      - title: Appserver
        const: a
      - title: Database
        const: d
      - title: Container
        const: k
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    metadata:
      layoutPosition:
        - 0
        - 0
    properties:
      customizationSpec: '${substring(input.image,0,1) == "w" ? "windows" : "linux"}'
      image: '${input.image}'
      name: '${input.environment}${substring(input.image,0,1) == "w" ? "w" : "l"}-${to_lower(resource.Cloud_vSphere_Network_1.projectCode)}-${input.servertype}'
      flavor: '${input.size}'
      count: '${input.count}'
      networks:
        - network: '${resource.Cloud_vSphere_Network_1.id}'
      attachedDisks: []
  Cloud_vSphere_Network_1:
    type: Cloud.vSphere.Network
    metadata:
      layoutPosition:
        - 1
        - 0
    properties:
      networkType: existing


Until next time…

Anytime you learn, you gain. -Bob Ross

Let’s Start to Fill Our Toolbox

One of the best features of vRA is the API first* approach to management, but I need some tools to get there. Postman is great for learning and prototyping, but I need solid vRO Actions to build functional workflows to integrate with the Event Subscriptions and to automate the configuration of vRA itself.

Much of this will be repetitive for experienced vRO users, but hopefully helpful to others who are just starting. The actions covered here will be used in future examples, and I wanted readers to have a reference. And as I warned in the welcome post, the ramblings will go wherever I find interesting.

The toolbox and examples are all designed with the intent to use the vRealize Automation Cloud API available on VMware {code}, or you can access the Swagger API within the deployed vRA appliance.

https://[vRA Appliance]/automation-ui/api-docs/


vRO has a built-in REST plugin which allows you to add REST hosts and REST operations and to generate operation workflows. Don’t use it! Ok, let me walk that back a bit. For certain use cases, such as querying the Puppet DB, adding the trust keys to the vRO keystore and then setting up the REST host using those keys works well. But configuring the REST plugin with basic user/password authentication becomes a lot of maintenance later, and plugin configurations cannot be migrated if you run multiple vRA environments. I really wish someone had told me 5 years ago, but we live and learn.

Bypassing the REST plugin, I use 3 base actions as my starting point for REST API integration. These work well for me and I take no credit for writing them; I used VMware {code} and vCommunity posts to find code examples to cobble them together. The driving design is to keep them simple and easy to use, and then build additional specialized actions extending the core functionality, which can be a slippery slope of action overload. A package of all actions shown in this post is available here.



importCert: does exactly what the name implies. The code was borrowed from the vRO library workflow “Import a Certificate from URL” and simplified down to this action. Since I am running in an isolated lab environment, I skip most standard certificate validation and only throw an exception if the certificate is expired.

var ld = Config.getKeystores().getImportCAFromUrlAction();
var model = ld.getModel();
model.value = url;

var certValidation = ld.validateCertificates();
var certInfo = ld.getCertInfo();

if (certValidation.isCertificateExpired() == true) throw "Certificate is expired. \n" + certInfo;

var error = ld.execute();
if (error != null) throw error;

createTransientRestHost: This action allows me to bypass the REST plugin. As the name suggests, it creates a transient REST host that lives only for the duration of the workflow and is automatically destroyed. The input is the FQDN of the REST host, and it uses importCert to add the certificate to vRO.

if (fqdn == null || fqdn == "") return null;

var url = "https://" + fqdn;

// import the host certificate before creating the host
// ("ar.util.rest" is assumed as the module holding these actions)
System.getModule("ar.util.rest").importCert(url);

var restHost = RESTHostManager.createHost("TransientRESTHost-" + fqdn);
restHost.url = url;
var transientRestHost = RESTHostManager.createTransientHostFrom(restHost);

return transientRestHost;

request: This is the generic action for virtually any REST request and returns the REST response object. I debated adding response error handling here, but opted to leave it out; the responsibility for all error handling rests (pun intended) with the workflow/action using this low-level action. Several inputs default to the values used for the most common requests as well.

if (fqdn == null || fqdn == "") return null;
if (url == null || url == "") return null;
if (method == null || method == "") method = "GET";
if (contentType == null || contentType == "") contentType = "application/json";
if (content == null) content = "";
if (headers == null) headers = new Properties();

// "ar.util.rest" is assumed as the module holding these actions
var restHost = System.getModule("ar.util.rest").createTransientRestHost(fqdn);
var request = restHost.createRequest(method, url, content);
request.contentType = contentType;

// add any supplied headers to the request
for each (var header in headers.keys) {
	System.debug("Headers: " + header + ":" + headers[header]);
	request.setHeader(header, headers[header]);
}

var response = request.execute();
System.debug("Response Code: " + response.statusCode);
return response;

Now let’s use these actions to get the vRA authentication bearer token used for subsequent API requests. This is the first of many specialized actions extending the core REST actions. I also wanted my vRA interactions to be extremely simple, so I externalized the vRA URL, userid, and password into configuration attributes. The action to get the bearer token requires no inputs, and the returned Properties object contains the headers required for further vRA requests.

var username = System.getModule("ar.util.helpers").getLabConfig("vra_userid");
var password = System.getModule("ar.util.helpers").getLabConfig("vra_password");
var fqdn = System.getModule("ar.util.helpers").getLabConfig("vra_fqdn");

var url = "/csp/gateway/am/api/login?access_token";
var method = "POST";
var content = {
	"username": username,
	"password": password
};

// "ar.util.rest" is assumed as the module holding these actions
var response = System.getModule("ar.util.rest").request(fqdn, url, method, JSON.stringify(content), null, null);
var responseJSON = JSON.parse(response.contentAsString);

var headers = new Properties();
headers.put("Authorization", "Bearer " + responseJSON.access_token);
return headers;

The next must-have is an action to make any REST API call to vRA using all the building blocks so far.

if (url == null || url == "") return null;
if (method == null || method == "") method = "GET";
if (content == null) content = "";

// "ar.util.rest" is assumed as the module holding these actions
var headers = System.getModule("ar.util.rest").getAccessTokenHeaders();
var fqdn = System.getModule("ar.util.helpers").getLabConfig("vra_fqdn");
return System.getModule("ar.util.rest").request(fqdn, url, method, content, headers, null);

At this point I have a single-action one-liner for any scriptable task to interact with vRA.

var response = System.getModule("ar.util.rest").genericRestAPI("/iaas/api/cloud-accounts", "GET", null);
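The response object then only needs its contentAsString parsed. Here is a plain-JavaScript sketch of handling the result (the sample payload is illustrative; the IaaS API wraps results in a content array):

```javascript
// stand-in for response.contentAsString from /iaas/api/cloud-accounts
var contentAsString = JSON.stringify({
  content: [
    { name: "AWS Lab", cloudAccountType: "aws" }
  ],
  totalElements: 1
});

// pull the account names out of the content array
var accounts = JSON.parse(contentAsString).content;
var names = accounts.map(function (a) { return a.name; });
console.log(names.join(", ")); // AWS Lab
```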

Time to jump on that slippery slope I mentioned above for a ride. There are certain vRA API calls I use over and over, and building further specialized actions for specific calls can be very useful, but it can also end in action overload, which is where I tend to land. A useful getDeployments action retrieves one or all deployments based on the single optional input deploymentId.

var url = "/deployment/api/deployments";
// append the id when a single deployment is requested
if (deploymentId != null) {
	url = url + "/" + deploymentId;
}
System.debug("getDeployments url: " + url);
// "ar.util.rest" is assumed as the module holding these actions
return System.getModule("ar.util.rest").genericRestAPI(url, null, null);

The toolbox will gain many more tools in the future, but now I have a starting point to get deeper into the Event Broker Service subscriptions and start configuring environments during provisioning – stay tuned. A package of all actions from this post is available here.

*Maybe I should say an “almost” API first approach to management, since the current public API does not cover 100% of product configurations, but I hope this will be fixed in upcoming releases of vRA.

Anytime you learn, you gain. -Bob Ross

Home Lab for the Ramblings

There are many blogs and resources available providing very good help on setting up a home lab. This post is not one of those. There are also some amazing home labs out there; check out #homelabking and yes – @MarcHubbert is the King. This is not about one of those awesome home labs either. I’m just sharing the home lab setup I built on the cheap to support my vRA addiction, and just a bit on how I automate it.

Home Lab

The lab details.

  • Intel NUC8i5BEH, 64GB Ram, 1TB Samsung NVMe SSD
  • iMac (27-inch, Late 2009) 2.8 GHz Core i7, 12 GB Ram
  • Two Raspberry PI 3b+
  • Ubiquiti EdgeRouter X
  • Ubiquiti Access Point
  • Artillery Sidewinder X1 3d printer
  • AWS Account

That’s it. I can do everything I want to do with this setup. I will be expanding in the future to dive into the VMware PKS stack, but for now it suits my needs.

The lab consists of a single Intel NUC running ESXi 6.7u3 booting from a USB drive. The internal 1TB SSD is used as a single datastore. It runs as simple an ESXi configuration as possible: a single network, no DRS, no vMotion, no vSAN, etc. This host supports vRA, vRO, vIDM, vRLCM, and a Bitnami GitLab server. It is also used for on-premises workload deployments from vRA. AWS is my primary compute datacenter for vRA deployments, as it is pretty easy to melt the NUC running the vRealize stack.

ESXI Usage

The brain of the lab is a 10 year old iMac running High Sierra, which is also my primary home computer. I would so like to upgrade, but this thing just keeps on running. The iMac runs a vCenter Server Appliance as a VM under VMware Fusion 8.5, and it is also used for running the vRO client, PowerShell scripts, and writing this blog post.

The first Raspberry Pi runs Pi-hole network ad blocking, the UniFi network controller software, and a lab-only Citadel mail server. The second Pi is dedicated to the OctoPrint software controlling an Artillery Sidewinder X1 3D printer. This Pi and printer are not really core pieces of the home lab, but I do plan to experiment with the OctoPrint API and see if I can deliver 3DPaaS (3D Printing as a Service) from the vRA catalog.

I am severely internet challenged, and wireless LTE is the only service currently available. A Ubiquiti EdgeRouter X connected to the wireless modem and a Ubiquiti access point for wifi provide a single private /24 network for the home lab and house.

Now for some Automation..

The networking equipment and the Pi-hole are the only lab infrastructure running 24×7; I only start up the lab when I am actively working on a project. Since I am an automation guy, of course I had to automate the start/stop, and it gave me a good excuse to dive into some PowerShell scripting. Running vCenter external to my ESXi host makes this easy to do with PowerCLI. I suspend all of my lab VMs at shutdown, since waiting 10+ minutes for vRA 8 to start up gets old quickly.

The procedure to start the lab is to run the PowerUpLab.ps1 script and hit the power button on the NUC. (I hope to remove the power-button step, but wake-on-lan is not working.) Lab startup takes about 3 minutes from zero to logging in to vRA; the majority of that is waiting for Fusion to resume the vCenter appliance. I really need a new Mac. To shut down, the PowerDownLab.ps1 script cleanly brings the lab down in about a minute.

PowerUpLab.ps1 script:

  • Launch VMware Fusion
  • Resume VCSA VM
  • Take the ESXi host out of maintenance mode
  • Resume the vRA, vIDM, and GitLab VMs

PowerDownLab.ps1 script:

  • Suspend all VMs on the ESXi host
  • Put the ESXi host in maintenance mode
  • Power off the ESXi host
  • Suspend VCSA VM in Fusion
  • Shut down Fusion

This is my current home lab and the automation controlling it, both continuously evolving. Here are a few home lab resources I have found very helpful to get this running.

I really need a good Bob Ross style sign-off message for every blog post… I’ll keep working on that lab addition.

How about we peek under the covers of the vRA 8 Event Broker for a bit.

Those of you familiar with vRealize Automation (vRA) 7.x may have had the pleasure (or headache) of working with Event Broker Subscriptions (EBS). The good news is that EBS in vRA 8 makes it much simpler to add individual events, can run good old vRO workflows or the new ABX actions, and has added a recovery runnable item. It has lost the publish/draft capability that let you disable an event without deleting it, and the ability to subscribe to all events in a category with one configuration. At this time EBS is not available in the public API, though it is possible by reverse engineering the UI with your browser. Hopefully the API will be made officially public soon; I’m really tired of recreating all of these subscriptions every time I deploy a new vRA appliance.


What is really important to know is what events are available, when they fire, and what property payloads are available to the workflows. To find out, I subscribed all the events that run during provisioning to a single workflow – EBS Events.

Event Subscriptions

I set up a basic blueprint deploying into vCenter to trigger the events. The blueprint is nothing fancy: 4 Photon VMs attached to a pre-existing network, with one disk attached.


Here is the EBS Events workflow. As you can see from the view of the workflow runs, all we know is that a lot of events have fired. Not very useful until you drill into each of the execution logs to see what is really going on.

EBS Events

This is where things start to get exciting, if you are into this kind of thing like I am. The EBS Events workflow pulls information from the system context metadata, sets the __tokenName, and passes it to a nested workflow, __token EBS Events. This makes it much easier to see what order the EBS subscriptions fire in, and how many times for the various resources.
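The token-naming idea is simple enough to sketch in plain JavaScript; the metadata field names below are illustrative stand-ins, not the exact vRO context schema:

```javascript
// hypothetical event context fields used to label the nested workflow run
var metadata = {
  eventTopicId: "compute.provision.pre",
  deploymentId: "11e8-abc",
  userName: "configadmin"
};

// build a __tokenName so each nested run is identifiable at a glance
var tokenName = [metadata.eventTopicId, metadata.deploymentId, metadata.userName].join(" | ");
console.log(tokenName); // compute.provision.pre | 11e8-abc | configadmin
```

With the topic, deployment, and requesting user in the run name, the flat list of nested runs becomes a readable event timeline.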

__Token EBS Events

Now this view actually gives you some very useful information just from looking at the workflow runs. You can see all the events that fired, their order, per deployment or resource, and who requested the blueprint deployment. The “who” bit is very handy on a production system with many requests to weed through when troubleshooting. Just watching the vRO runs during the deployment gives you a good idea of the progression of a deployment as well. Similarly for the events during the destroy of the deployment.

Destroy Events

Now let’s dive into some individual event logs to see what vRA passes to the workflows. The Disk Allocation Pre event runs early in provisioning, and you can start to get a feel for the data available to vRO, either as the inputProperties or in the _metadata context. The disk allocation only runs once, corresponding to the single disk added in the blueprint, but 4 disks are allocated in vSphere corresponding to the 4 VMs deployed.


The Network Configure event runs once for the single network on the blueprint — I see a pattern forming. The properties during networking get a bit more interesting. The event contains all the information for the network configuration of the 4 VMs in the deployment, such as custom properties, network profile ids, and network subnet selection ids. I’ve been playing in this event quite a bit to understand the schema of multi-level arrays for selection and to see what can be modified, but that is for a future post. Hopefully the anticipation and sneak peek of what’s to come will keep you coming back for more.


One of the more straightforward payloads is for the Deployment Resource Request Pre event. There is a bundle of information available to drive customized workflows, and this event fires for every resource in the deployment.


Hopefully this gives a little understanding of what events are available and what data can be used for customization workflows during a vRA deployment. I will be doing deep dives into many of these event topics in the future to see just what can be modified.

Here is a vRO package with the EBS Events workflows so you can start to explore EBS events and payload properties in your environment.

vRO 8 Web Interface and Legacy Swing Usage

vRealize Orchestrator 8.0 comes with a new web-only interface and other changes you can view in the official release notes. The new interface is the first release by VMware to modernize vRO and abandon the legacy Java Swing technology. The web interface has some nice features such as limited native git integration, operations dashboards, and the all-new HTML editor interface.

One of the most annoying “features” of the web interface is that you can no longer use folders to manage workflows; workflows are grouped by tags. Don’t get me wrong, I like tags. But coming from a world where workflow sorting/grouping is directory based (including all of the built-in vRO 8 library), transitioning to one big flat folder for all new custom workflows just doesn’t fit how I currently work. The existing folder structure is represented as tags, and all newly created workflows get a default “web-root” tag.

Now, I have never been a huge fan of the Java Swing client — but it is still better than the new web interface. Ok, maybe not better; I am probably just “Get off my lawn!!!!” old and don’t like change. But lucky for me, the vRO 7.6 Swing client is still mostly functional against a vRO 8 server.

You need to deploy a 7.6 vRO appliance to get the client running. I installed the 7.6 vRO Workflow Designer locally and nuked the 7.6 appliance. At login, enter the URL of your vRO or vRA 8.0 appliance. The client will show a server version mismatch, but just ignore this and plow forward and the client will launch.

The other big gotcha is that once you edit a workflow via the web interface, it can no longer be edited, deleted, moved, or copied from the Swing client; you can only view it and look at the logs. Also, any workflow created in the web interface is dumped into the one “web-root” folder, which without a good tagging/naming scheme quickly becomes a big pile of “what does this workflow do?”.

Not quite all rainbows and unicorns, unfortunately: packages will not import, and you must go back to the web interface for imports.

I’m guessing the Swing client will cease to function in a future 8.x or 9 release. Hopefully by then the vRO product team extends the web interface to bring back the folder structure. Hint for the product manager on the VMware Orchestrator team, if they are reading this.

And I must have a standard disclaimer: use the Swing client at your own risk; I’m pretty sure it is not endorsed or supported by VMware. I’m going to keep using it and will post updates on enhancements to the HTML interface… or when Swing is officially broken.