VMware Cloud Director OIDC Integration with VMware Workspace ONE Access

Prerequisites

  • VMware Workspace ONE Access must already be deployed.
  • VMware Workspace ONE Access must be configured with a directory service as a source for users and groups.
  • Cloud Director must be installed and configured for provider and tenant organizations.

Bill of Materials

  • VMware Cloud Director 10.5.1
  • VMware Workspace ONE Access 23.09.00

Steps to Configure Workspace ONE Access for OIDC Authentication

Workspace ONE Access uses OAuth 2.0 to enable applications to register with Workspace ONE Access and create secure, delegated access to applications. In this case, we will integrate Cloud Director with Workspace ONE Access.

  • In the Workspace ONE Access console Settings > OAuth 2.0 Management page, click ADD CLIENT.
  • In the Add Client page, configure the following.
  • Click SAVE. The client page is refreshed and the Client ID and the hidden Shared Secret are displayed.
  • Copy and save the client ID and generated shared secret.
  • Note: If the shared secret is not saved or you lose the secret code, you must generate a new secret and update Cloud Director with the regenerated secret. To regenerate a secret, on the OAuth 2.0 Management page, click the client ID that requires a new secret and click REGENERATE SECRET.

Steps to configure VMware Cloud Director to use Workspace ONE Access for Provider/Tenant users and groups

  • From the top navigation bar, select Administration.
  • In the left panel, under Identity Providers, click OIDC, or browse directly to https://[VCD Endpoint]/(provider or tenant/[orgname])/administration/identity-providers/oidcSettings
  • If you are configuring OIDC for the first time, copy the client configuration redirect URI and use it to create a client application registration with an identity provider that complies with the OpenID Connect standard, for example, VMware Workspace ONE Access. (This has already been done above.)
  • Click Configure.
  • Verify that OpenID Connect is active and fill in the Client ID and Client Secret that you created in VMware Workspace ONE Access during client creation above.
  • To use the information from a well-known endpoint to automatically fill in the configuration information, turn on the Configuration Discovery toggle and enter a URL at the site of the provider that VMware Cloud Director can use to send authentication requests to. Fill in the IDP Well-known Configuration Endpoint field with https://[Workspace ONE Access URL]/SAAS/auth/.well-known/openid-configuration (a quick curl check of this endpoint is sketched at the end of this section).
  • Click Next.
  • If everything is correctly configured, the information below is populated automatically. Note that we are using the User Info endpoint.
  • VMware Cloud Director uses scopes to authorize access to user details. When a client requests an access token, the scopes define the permissions that the token has to access user information. Enter the scope information and click Next.
  • Since we are using User Info as the access type, map the claims as below and click Next.

NOTE: At the claims mapping step, the Subject field is populated by default with “sub”, which means that VCD users will have the username format “[username]@XXX”. If you want to import users into VCD with a different format, you can change the Subject mapping to “email” and then import users into VCD using the email address attached to the account.

This is the most critical piece of configuration. Mapping this information is essential for VCD to interpret the token/user information correctly during the login process.
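As referenced in the configuration steps above, you can verify that the discovery endpoint is reachable before relying on Configuration Discovery. This is a minimal sketch only; ws1.example.com is a placeholder for your Workspace ONE Access FQDN.

# Fetch the OpenID Connect discovery document from Workspace ONE Access.
# -k skips certificate validation and is only for a quick reachability test.
curl -k https://ws1.example.com/SAAS/auth/.well-known/openid-configuration

The JSON response should list the authorization, token, and userinfo endpoints that VMware Cloud Director fills in automatically.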

Login as an OIDC GROUP Member User

  • In the Provider/Tenant organization’s Administration page, import OIDC groups and map them to existing VCD roles.
  • NOTE: If you don’t see the IMPORT GROUPS button, refresh the page and it will appear.
  • The user goes to https://[VCD Endpoint]/(provider or tenant/[orgname]).
  • The user is redirected to the Workspace ONE Access login page and can log in with an account that is a member of the imported group.
  • The user is then redirected back to VCD and should now be fully logged in.

After the first successful login, the organization administrator can see the newly auto-imported user.

Login as an OIDC User

  • In the Provider/Tenant organization’s Administration page, import OIDC users and map them to existing VCD roles.
  • The user goes to https://[VCD Endpoint]/(provider or tenant/[orgname]).
  • The user is redirected to the Workspace ONE Access login page and logs in there.
  • The user is then redirected back to VCD and should now be fully logged in.

If you get the SSO Failure page, double-check that you imported the correct group/user and that the username format is correct. For additional information, you can check here, and for troubleshooting and configuring additional logging, you can check the official documentation here.

Login without OIDC or as a Local User

In version 10.5, if an organization in VMware Cloud Director has SAML or OIDC configured, the UI displays only the Sign in with Single Sign-On option. To log in as a local user, navigate to https://vcloud.example.com/tenant/tenant_name/login or https://vcloud.example.com/provider/login.

NSX Multi-Tenancy in VMware Cloud Director

Multi-tenancy was introduced in the NSX UI starting with VMware NSX 4.1, and now, commencing with version 10.5.1, VMware Cloud Director introduces support for NSX multi-tenancy, facilitating direct alignment of VCD organizations with NSX Projects.

What are NSX Projects?

A project in NSX functions akin to a tenant. Creating projects enables the separation of security and networking configurations among different tenants within a single NSX setup.

Multi-tenancy in NSX is achieved by creating NSX projects, where each project represents a logical container of network and security resources (a tenant). Each project can have its set of users, assigned privileges, and quotas. Multi-tenancy serves various purposes, such as providing Networking as a Service, Firewall as a Service, and more.

How do NSX Projects relate to Cloud Director Organizations?

Within the VCD platform, the tenancy is established via Organizations. Each tenant receives its exclusive organization, ensuring a distinct and isolated virtual infrastructure tailored to their tasks. This organizational setup grants precise control over tenant access to resources, empowering them to oversee Users, Virtual Data Centers (VDCs), Catalogs, Policies, and other essentials within their domain.

To clearly outline the tenant structure, VMware NSX introduced a feature known as Projects. These Projects allocate NSX users to distinct environments housing their specific objects, configurations, and monitoring mechanisms based on alarms and logs.

With VCD 10.5.1, management functionalities tied to NSX Tenancy fall within the exclusive purview of the Provider. NSX Tenancy operates on an Organization-specific level within VCD. When activated, a VCD Organization aligns directly with an NSX Project.

VCD drives and manages the creation of the associated NSX project, allowing the User to configure the project identifier. The NSX project is actually created during the creation of the first VDC in the organization for which you activated NSX tenancy. The name of the NSX project is the same as the name of the organization to which it is mapped.

How to enable?

The Cloud Provider can enable NSX Tenancy for a specific organization by going into the Cloud Director Organizations section, choosing an organization, and selecting “NSX Tenancy”. The provider can also define a Log Name, which will be the organization’s unique identifier in the backing NSX Manager logs.

The name of the NSX project will be the same as the name of the organization to which it is mapped.

Once NSX tenancy has been activated at the Org level, the Cloud Provider can create a new Org VDC and choose to enable “NSX Tenancy”. This is when the NSX Project actually gets created in NSX.

NOTE: Network Pool selection is disabled. This is because NSX supports Project creation only in the default overlay Transport Zone. Also, make sure the default overlay Transport zone already exists.

Note: If you choose not to activate NSX tenancy during the creation of an organization VDC, you cannot change this setting later.

When should you not enable tenancy?

Some use cases do not require organization VDC participation in NSX tenancy, for example, if the VDC only needs VLAN networks. Additionally, organization VDCs using NSX tenancy are restricted to using the network pool that is backed by the default overlay transport zone, so, in order to be able to use a different network pool, you might wish to opt out of NSX tenancy.

Also, there are a few features that NSX Projects do not support today, such as NSX Federation deployments. In addition, not all Edge Gateway features are available for networking-tenancy-enabled VDCs, for example VPNs (IPsec/L2) and sharing segment profile templates. This is a work in progress, and we will see more features arrive in future releases.

Conclusion

Aligning NSX Projects with VCD’s Tenancy ensures customers access an extensive array of networking capabilities offered by the NSX Multi-tenancy solution. Among these crucial functionalities is tenant-centric logging for core VCD networking services like Edge Services and Distributed firewalls. Additionally, integrating NSX Projects paves the way to investigate potential enhancements, facilitating tenant self-service login capabilities within VCD features. Below, you can find more information and capabilities.

Managing NSX Tenancy in VMware Cloud Director

VMware Cloud Director 10.5.1 adopts NSX Projects

Deploy, Run, and Manage Any Application with VMware Cloud Director Content Hub

In today’s fast-paced digital landscape, businesses require agile and scalable solutions to deploy and manage their applications efficiently. VMware Cloud Director Content Hub (VDCH) introduced in VMware Cloud Director 10.5, offers a robust platform for a simplified and automated way to deploy, run, and manage a wide range of applications. In this blog, we’ll explore the key features and benefits of VMware Cloud Director Content Hub and how it streamlines the application deployment process.

What is Content Hub ?

Content Hub, introduced in VMware Cloud Director 10.5, serves as a convenient tool that unifies the interface for accessing both VM and container-based content. This feature seamlessly integrates external content sources, such as VMware Marketplace and Helm chart repositories, into the VMware Cloud Director environment.

By incorporating external sources, content can be seamlessly brought into the catalogs of VMware Cloud Director, ensuring its immediate availability for users. With the integration of Content Hub, VMware Cloud Director introduces an interface oriented towards applications, enabling users to effortlessly visualize and retrieve catalog content, thus elevating the overall content management experience.

As a result of these enhancements, catalog items are presented as “Application Images,” highlighting crucial application information such as the application’s name, version, logo, screenshots, and other pertinent details necessary for the consumption of the application.

Integration with VMware Marketplace

To integrate Marketplace with VMware Cloud Director, a Cloud Provider needs to create a Marketplace connection and then share it with one or more tenant organizations. Here is the procedure:

  • Cloud Provider must be logged in as a system administrator.
  • Cloud Provider must have access to VMware Marketplace and must have generated an API token there. (See Generate API Tokens in the VMware Cloud Services Product Documentation.)
  • From the left panel, select VMware Marketplace and click New.
  • The New VMware Marketplace dialog opens; enter the following values:

Integration with Helm Chart Repository

A Helm chart repository resource in VMware Cloud Director is a named resource that stores all the information required to establish a connection with a remote Helm chart repository. It allows users to browse contents from the remote repository and import applications from it.
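For context, this is the same kind of repository you would normally interact with through the Helm CLI. A quick sketch using the public Bitnami repository as an example (any reachable chart repository URL works the same way):

# Register a public chart repository and list a chart it serves.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/wordpress

In VMware Cloud Director, you register the same kind of repository URL as a Helm Chart Repository resource instead of adding it with the Helm CLI.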

Both Cloud Providers and tenants can create Helm chart repository content connections. Tenants need specific rights, which providers can assign to individual tenants based on requirements.

  • From the left panel, select Helm Chart Repository and click New.
  • The New Helm Chart Repository dialog opens; enter the following values:

  • Password-protected Helm chart repositories are also supported. 
  • You can create multiple Helm chart repository resources in VMware Cloud Director.

Share a VMware Marketplace/Helm Chart Repository Resource with tenants

As a service provider administrator, you can share the configured VMware Marketplace resources with other tenant organizations.

  • Inside VMware Marketplace/Helm Chart Repository, on the left of the name of the connection that you want to share, click the vertical ellipsis and select Share.
  • Share the resource.
  • Select the tenant organizations you want to share the resource with.
  • You can set the individual access level for the respective tenant organization only as Read Only.
  • Click Save.

NOTE: Since vCD caches the Helm chart repository items in the vCD database, you can click Sync to ensure the latest content from the remote repository is cached in the vCD DB.

Roles Required

A new set of user roles has been introduced to facilitate users in accessing and managing the Content Hub. It is crucial to assign the appropriate user roles to individuals within the organization so that they can access and utilize the Content Hub feature effectively.

Rights Bundles: A Rights Bundle is a collection of rights for a tenant organization; it defines what a tenant organization has access to. Rights Bundles are always defined globally, and they can be applied to zero or more tenants.

Roles: A Role is a collection of rights for a user; it defines what an individual user has access to. Roles can be either tenant-specific (defined within a single organization and visible only to that organization) or global (defined globally and applied to zero or more tenants).

Putting all the pieces together looks something like this:

Following are the new RIGHTs introduced to manage CatalogContentSource entities:

Apply the above rights to organizations as well as users, so that they have enough rights to add and deploy applications.

Content Hub Operator for Kubernetes Container Service Extension

To perform this operation, you first need full control of the Kubernetes cluster where you are deploying the container applications, and additionally the following VMware Cloud Director rights:

You must install a Kubernetes operator to allow tenant users to deploy container applications from external content sources. From the top navigation bar, select Content Hub. From the left panel, select Kubernetes Operator. On the Kubernetes Operator page, select the Kubernetes cluster on which you want to install the Kubernetes operator and click Install Operator.

The Install Operator on cluster-name dialog opens; on this page, configure the source location of the Kubernetes operator package.

Select the type of source location for the Kubernetes operator package. VMware Registry location with anonymous access is selected by default.

(Optional) To configure a custom registry, first, you must clone the Content Hub Kubernetes operator package from the VMware container registry to your custom registry. The Content Hub Kubernetes operator package is in Carvel format and you must use the Carvel imgpkg tool for cloning the package.
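A rough sketch of such a clone with the Carvel imgpkg CLI is shown below. The bundle path, tag, and destination registry are placeholders; use the actual package location referenced in the Install Operator dialog and your own registry.

# Copy the operator bundle (Carvel format) from the VMware registry to a custom registry.
# Both the source bundle path and the destination repository below are placeholders.
imgpkg copy \
  -b projects.registry.vmware.com/vcd-contenthub/contenthub-operator-bundle:VERSION \
  --to-repo registry.example.com/vcd-contenthub/contenthub-operator-bundle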

Click Install Operator.

For a few minutes after installation starts, the operator status shows “Not Reachable”.

It then finally turns to “Ready”. After the installation of the operator completes successfully, the system creates two namespaces under the Kubernetes cluster. In the first namespace, “vcd-contenthub-system”, the Content Hub Operator manager is installed. The second namespace, “vcd-contenthub-workloads”, is empty; it is used to deploy container applications at a later stage.
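A quick way to confirm this from the cluster side, assuming kubectl is pointed at the tenant cluster’s kubeconfig, is:

# Both namespaces are created by the Content Hub operator installation.
kubectl get namespaces | grep vcd-contenthub
kubectl get pods -n vcd-contenthub-system      # Content Hub Operator manager
kubectl get pods -n vcd-contenthub-workloads   # empty until applications are deployed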

Catalog Versioning

In previous versions of VMware Cloud Director, there was no concept of versioning in catalogs. For instance, when a user imported multiple versions of the same application from external sources into the catalog, each version of the VM or container was stored and listed as an individual resource. From VMware Cloud Director 10.5 onwards, when a virtual machine or container application is imported with multiple versions, either at the same time or at different intervals, Content Hub is capable of managing and structuring the versioning of that resource.

Let’s Deploy a Container Application

Upon launching an application image that has multiple versions, you will be prompted to choose the specific version of the image you wish to deploy.

When you launch a container application, the Container Application launcher window opens; provide the application name, select the version to deploy, and select the TKG cluster.

(Optional) To customize the application parameters, click Show Advanced Settings.

Click Launch Application.

The system deploys the container application. After the deploy operation completes, on the card for the application, the status appears as Deployed. VMware Cloud Director deploys the container application as a Helm release under the vcd-contenthub-workloads namespace to the Kubernetes cluster.
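Because the application lands as a Helm release in that namespace, it can also be inspected from the cluster side. A minimal sketch, assuming kubectl and helm access to the TKG cluster:

# The deployed container application appears as a Helm release.
helm list -n vcd-contenthub-workloads
# Pods backing the application run in the same namespace.
kubectl get pods -n vcd-contenthub-workloads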

Clicking either the application name or DETAILS takes the user to the details of the application, such as status, pod status, access URLs, etc.

The user can use the access URL details to access the application easily.

Deploy a Stateful Container Application

In this section, we will deploy a Cassandra database using Cloud Director Content Hub. The process is the same as in the section above. Once the application is deployed, check the details; here the details are different, for example there is no access URL, and the required Persistent Volumes were created automatically.

It also gives visibility into the StatefulSet and its status, secrets (if any), services and their types, as well as other resources. This allows the end user to avoid checking the Kubernetes cluster directly, since this information is available in the vCD GUI.

Deploy a Virtual Machine Application

A vApp consists of one or more virtual machines that communicate over a network and use resources and services in a deployed environment. A vApp can contain multiple virtual machines. If you have access to a catalog, you can use the vApp templates in the catalog to create vApps.

A vApp template can be based on an OVF file with properties for customizing the virtual machines of the vApp. The vApp inherits these properties. If any of those properties are user-configurable, you can specify their values.

Enter a name and, optionally, a description for the vApp.

Enter a runtime lease and a storage lease for the vApp, and click Next.

From the Storage Policy drop-down menu, select a storage policy for each of the virtual machines in the vApp, and click Next.

If the placement policies and the sizing policies for the virtual machines in the vApp are configurable, select a policy for each virtual machine from the drop-down menu.

If the hardware properties of the virtual machines in the vApp are configurable, customize the size of the virtual machine hard disks and click Next.

If the networking properties of the virtual machines in the vApp are configurable, customize them and click Next.

On the Configure Networking page, select the networks to which you want each virtual machine to connect.

(Optional) Select the check box to switch to the advanced networking workflow and configure additional network settings for the virtual machines in the vApp.

Review the vApp settings and click Finish.

Under Applications > Virtual Applications, you can see the VM being deployed.

With the Content Hub feature, providers can offer centralized content management for application images, vApp and VM templates, and media. VMware Cloud Director now has full control over the deployed and listed images, storing all pertinent information in its database, unlike application images imported from App Launchpad, where VMware Cloud Director only listed the imported images while App Launchpad stored all the details in its own database. It also offers ease of use, since you won’t have to set up and configure App Launchpad, which would otherwise be an additional task.

Infrastructure as Code with VMware Cloud Director

In today’s fast-paced digital landscape, organizations are constantly seeking ways to optimize their IT operations and streamline infrastructure management. One approach that has gained significant traction is Infrastructure as Code (IaC). By treating infrastructure provisioning and management as code, IaC enables organizations to automate and standardize their processes, resulting in increased efficiency, scalability, and consistency.

In this blog article, we will explore the concept of Infrastructure as Code and its practical implementation using VMware Cloud Director. We will delve into the benefits and challenges of adopting IaC, highlight the features of VMware Cloud Director, and showcase how the integration of Terraform with VMware Cloud Director can revolutionize IT operations for Cloud Providers as well as customers.

What is Infrastructure as Code?

Infrastructure as Code (IaC) is a methodology that involves defining and managing infrastructure resources programmatically using code. It allows organizations to provision, configure, and manage their infrastructure resources, such as compute, network, and storage, through automated processes. By treating infrastructure as code, organizations can leverage the same software development practices, such as version control and continuous integration, to manage their infrastructure.

Benefits of Infrastructure as Code

The adoption of Infrastructure as Code brings numerous benefits to cloud providers as well as customers:

Consistency and repeatability: With IaC, infrastructure deployments become consistent and repeatable, ensuring that the same configuration is applied across different environments. This eliminates configuration drift and minimizes the risk of errors caused by manual configurations.

Efficiency and scalability: IaC enables organizations to automate their infrastructure provisioning and management processes, reducing the time and effort required for manual tasks. This allows IT teams to focus on more strategic initiatives and scale their infrastructure easily as the business demands.

Version control and collaboration: By using code repositories and version control systems, organizations can track changes made to their infrastructure configurations, roll back to previous versions if needed, and collaborate effectively across teams.

Documentation and auditability: IaC provides a clear and documented view of the infrastructure configuration, making it easier to understand the current state and history of the infrastructure. This improves auditability and compliance with regulatory requirements.

Flexibility and portability: With IaC, organizations can define their infrastructure configurations in a platform-agnostic manner, making it easier to switch between different cloud providers or on-premises environments. This provides flexibility and avoids vendor lock-in.

Challenges of Infrastructure as Code

While Infrastructure as Code offers numerous advantages, it also presents some challenges that organizations should be aware of:

Learning curve: Adopting IaC requires IT teams to acquire new skills and learn programming languages or specific tools. This initial learning curve may slow down the adoption process and require investment in training and upskilling.

Code complexity: Infrastructure code can become complex, especially for large-scale deployments. Managing and troubleshooting complex codebases can be challenging and may require additional expertise or code review processes.

Continuous maintenance: Infrastructure code needs to be continuously maintained and updated to keep up with evolving business requirements, security patches, and technology advancements. This requires ongoing investment in code management and testing processes.

Infrastructure as Code Tools

There are various tools available in the market to implement Infrastructure as Code. Some of the popular ones include:

Terraform: Terraform is an open-source tool that allows you to define and provision infrastructure resources using declarative configuration files. It supports a wide range of cloud providers, including VMware Cloud Director, and provides a consistent workflow for infrastructure management.

Chef: Chef is a powerful automation platform that enables you to define and manage infrastructure as code. It focuses on configuration management and provides a scalable solution for infrastructure provisioning and application deployment.

Puppet: Puppet is a configuration management tool that helps you define and automate infrastructure configurations. It provides a declarative language to describe the desired state of your infrastructure and ensures that the actual state matches the desired state.

Ansible: Ansible is an open-source automation tool that allows you to define and orchestrate infrastructure configurations using simple, human-readable YAML files. It emphasizes simplicity and ease of use, making it a popular choice for infrastructure automation.

VMware Cloud Service Provider Program

The VMware Cloud Service Provider Program (CSP) allows service providers to build and operate their own cloud environments using VMware’s virtualization and cloud infrastructure technologies, such as VMware Cloud Director and Cloud Director Availability. This program enables service providers to offer a wide range of cloud services, including Infrastructure as a Service (IaaS), disaster recovery, Application as a Service, Kubernetes as a Service, DBaaS, and many other managed services.

Integrating Terraform with VMware Cloud Director

To leverage the power of Infrastructure as Code in VMware Cloud Director, you can integrate it with Terraform. Terraform provides a declarative language for defining infrastructure configurations and a consistent workflow for provisioning and managing resources across different cloud providers, including VMware Cloud Director. By combining Terraform with VMware Cloud Director, Cloud Providers/Customers can:

Automate infrastructure provisioning: With Terraform, Cloud providers can define Tenant virtual data center, virtual machines, networks, and other resources as code. This allows for automated and consistent provisioning of infrastructure resources, eliminating manual configuration. I wrote a blog article on how to start with Cloud Director & Terraform which can be found here:

Ensure consistency and repeatability: Infrastructure configurations defined in Terraform files can be version controlled, allowing for easy tracking of changes and ensuring consistency across different environments.

Collaborate effectively: Terraform code can be shared and collaboratively developed within teams, enabling better collaboration and knowledge sharing among IT professionals.

Enable infrastructure as self-service: By integrating Terraform with VMware Cloud Director, organizations can provide a self-service portal for users to provision and manage their own infrastructure resources, reducing dependency on IT support.

Implementing Infrastructure as Code with VMware Cloud Director and Terraform

To begin with the Terraform configuration, create a main.tf file and specify the version of the VMware Cloud Director Terraform provider:

terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = "3.9.0"
    }
  }
}

provider "vcd" {
  url                  = "https://vcd-01a.corp.local/"
  user                 = "admin"
  password             = "******"
  org                  = "tfcloud"
  vdc                  = "tfcloud"
}

In the above configuration, replace the url, user, password, org, and vdc values with your own VMware Cloud Director details. Now, let’s define the infrastructure resources using Terraform code. Here’s an example of creating an organization in VMware Cloud Director:

#Create a new org name "tfcloud"
resource "vcd_org" "tfcloud" {
  name              = "terraform_cloud"
  full_name         = "Org created by Terraform"
  is_enabled        = "true"
  stored_vm_quota   = 50
  deployed_vm_quota = 50
  delete_force      = "true"
  delete_recursive  = "true"
  vapp_lease {
    maximum_runtime_lease_in_sec          = 0
    power_off_on_runtime_lease_expiration = false
    maximum_storage_lease_in_sec          = 0
    delete_on_storage_lease_expiration    = false
  }
  vapp_template_lease {
    maximum_storage_lease_in_sec       = 0
    delete_on_storage_lease_expiration = false
  }
}

Creating an Organization Administrator

#Create a new Organization Admin
resource "vcd_org_user" "tfcloud-admin" {
  org               = vcd_org.tfcloud.name
  name              = "tfcloud-admin"
  password          = "*********"
  role              = "Organization Administrator"
  enabled           = true
  take_ownership    = true
  provider_type     = "INTEGRATED" # INTEGRATED, SAML, OAUTH
}

Creating Org VDC

# Create Org VDC for above org
resource "vcd_org_vdc" "vdc-tfcloud" {
  name = "vdc-tfcloud"
  org  = vcd_org.tfcloud.name
  allocation_model  = "AllocationVApp"
  provider_vdc_name = "vCD-A-pVDC-01"
  network_pool_name = "vCD-VXLAN-Network-Pool"
  network_quota     = 50
  compute_capacity {
    cpu {
      limit = 0
    }
    memory {
      limit = 0
    }
  }
  storage_profile {
    name    = "*"
    enabled = true
    limit   = 0
    default = true
  }
  enabled                  = true
  enable_thin_provisioning = true
  enable_fast_provisioning = true
  delete_force             = true
  delete_recursive         = true
}

Creating Edge Gateway (NAT Gateway)

# Create Org VDC Edge for above org VDC
resource "vcd_edgegateway" "gw-tfcloud" {
  org                     = vcd_org.tfcloud.name
  vdc                     = vcd_org_vdc.vdc-tfcloud.name
  name                    = "gw-tfcloud"
  description             = "tfcloud edge gateway"
  configuration           = "compact"
  advanced                = true
  external_network {
     name = vcd_external_network.extnet-tfcloud.name
     subnet {
        ip_address            = "10.120.30.11"
        gateway               = "10.120.30.1"
        netmask               = "255.255.255.0"
        use_for_default_route = true
    }
  }
}
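With the provider and the resources above saved in your .tf files, the standard Terraform workflow applies. A minimal sketch, run from the directory containing the configuration:

# Download the vmware/vcd provider declared in main.tf.
terraform init
# Preview the organization, org VDC, user, and edge gateway to be created.
terraform plan
# Create the resources in VMware Cloud Director.
terraform apply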

Here is a blog post that covers tenant onboarding on Cloud Director using Terraform:

Best Practices for Infrastructure as Code with VMware Cloud Director

Implementing Infrastructure as Code with VMware Cloud Director and Terraform requires adherence to best practices to ensure efficient and reliable deployments. Here are some key best practices to consider:

Use version control: Store your Terraform code in a version control system such as Git to track changes, collaborate with teammates, and easily roll back to previous configurations if needed.

Leverage modules: Use Terraform modules to modularize your infrastructure code and promote reusability. Modules allow you to encapsulate and share common configurations, making it easier to manage and scale your infrastructure.

Implement testing and validation: Create automated tests and validations for your infrastructure code to catch any potential errors or misconfigurations before deployment. This helps ensure the reliability and stability of your infrastructure.

Implement a CI/CD pipeline: Integrate your Terraform code with a continuous integration and continuous deployment (CI/CD) pipeline to automate the testing, deployment, and management of your infrastructure. This helps in maintaining consistency and enables faster and more reliable deployments.

Use variables and parameterization: Leverage Terraform variables and parameterization techniques to make your infrastructure code more flexible and reusable. This allows you to easily customize your deployments for different environments or configurations.

Implement security best practices: Follow security best practices when defining your infrastructure code. This includes managing access controls, encrypting sensitive data, and regularly updating your infrastructure to address any security vulnerabilities.

Conclusion

Infrastructure as Code offers a transformative approach to managing infrastructure resources, enabling organizations to automate and streamline their IT operations. By integrating Terraform with VMware Cloud Director, organizations can leverage the power of Infrastructure as Code to provision, manage, and scale their infrastructure efficiently and consistently.

In this blog post, we explored the concept of Infrastructure as Code, the benefits and challenges associated with its adoption. We also provided a step-by-step guide on implementing Infrastructure as Code with VMware Cloud Director using Terraform, along with best practices.

By embracing Infrastructure as Code, Cloud providers and their Tenants can unlock the full potential of their infrastructure resources, accelerate their digital transformation journey, and ensure agility, scalability, and reliability in their IT operations.

Few More Links:

https://developer.vmware.com/samples/7210/vcd-terraform-examples

https://registry.terraform.io/providers/vmware/vcd/latest/docs

https://blogs.vmware.com/cloudprovider/2021/03/terraform-vmware-cloud-director-provider-3-2-0-release.html

Harness the Power of Cloud Director Data Solutions: Offer DBaaS using VMware SQL with MySQL

In the ever-evolving landscape of cloud technology, Cloud Service Providers (CSPs) play a crucial role in helping businesses unlock the potential of their data. Database as a Service (DBaaS) offerings have become essential tools for organizations looking to streamline their data management processes. This article will explore how our CSP is revolutionizing DBaaS with Cloud Director Data Solutions, powered by VMware SQL with MySQL. Discover how this powerful combination can transform CSP and customers’ data management experience.

VMware Cloud Director Extension for Data Solutions: An Introduction

The VMware Cloud Director Extension for Data Solutions is a game-changing plug-in for the VMware Cloud Director. By incorporating data and messaging services into the VMware Cloud Director portfolio, it enables cloud providers and their tenants to access and manage services such as VMware SQL with MySQL, VMware SQL with PostgreSQL, and the efficient messaging system, RabbitMQ.

The extension operates in conjunction with Container Service Extension 4.0 or later, facilitating the publication of popular data and messaging services to tenants. These services are deployed in a Tanzu Kubernetes Grid Cluster, which is managed by Container Service Extension 4.0 or later. This powerful combination simplifies the process of deploying and managing these services, while also offering data analytics and monitoring through Grafana and Prometheus.

How the VMware Cloud Director Extension for Data Solutions Functions

The VMware Cloud Director Extension for Data Solutions works hand-in-hand with the Container Service Extension 4.x to provide cloud providers with the ability to publish data and messaging services to their tenants. In turn, tenants can utilize these services for building new applications or maintaining existing ones.

Services are deployed within a Tanzu Kubernetes Grid Cluster, which is managed by the Container Service Extension 4.0 or later. This plays a crucial role in the deployment of services, with a service operator installed in a selected tenant Tanzu Kubernetes Cluster responsible for managing the entire lifecycle of a service, from inception to dissolution.

To better understand the architecture of the VMware Cloud Director Extension for Data Solutions, consider the high-level diagram provided below.

Use Cases for the VMware Cloud Director Extension for Data Solutions

Tenants can utilize the VMware Cloud Director Extension for Data Solutions for various purposes, such as creating database and messaging services at scale. Once these services are available in a tenant organization, authorized tenant users can create, upgrade, or delete PostgreSQL, MySQL, or RabbitMQ services. Advanced settings, such as enabling High Availability for VMware SQL with MySQL and VMware SQL with PostgreSQL, can be applied during the creation of a service.

In addition to creating database and messaging services, tenants can effortlessly manage their upgrades. When a newer version becomes available, the VMware Cloud Director Extension for Data Solutions interface will notify the user and prompt them to take action. Tenant administrators or users can then upgrade the chosen service with a single click, safeguarding the service against vulnerabilities and ensuring its stability.

Tenant self-service UI for the lifecycle management of VMware SQL with MySQL

Step:1 – Installing VMware Cloud Director Data Solutions operator

To run VMware SQL with MySQL in a Kubernetes cluster using the VMware Cloud Director extension for Data Solutions, you must install a VMware Cloud Director extension for the Data Solutions operator to a Kubernetes cluster. The VMware Cloud Director Data Solutions operator (DSO) is a backend service running within each tenant Kubernetes cluster. DSO manages the lifecycle of user resources in Kubernetes clusters upon user requests, sent through VMware Cloud Director Resource Defined Entities. The resources include both data solution operators like VMware RabbitMQ operators for Kubernetes and data solution instances like VMware RabbitMQ for Kubernetes. It deploys, upgrades, and updates various data solutions in Kubernetes clusters on behalf of the user.

  • Log in to VMware Cloud Director extension for Data Solutions from VMware Cloud Director.
  • Click Settings > Kubernetes Clusters.
  • Select the Kubernetes cluster where you want to run VMware Cloud Director extension for Data Solutions, and click Install Operator.
  • It takes a few minutes for the Operator Status of the cluster to change to Active (a quick cluster-side check is sketched after this list).
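If you want to confirm the operator from the cluster side, a minimal check is the following; grepping avoids assuming the exact namespace names the extension creates:

# List any pods created by the Data Solutions operator installation.
kubectl get pods -A | grep vcd-ds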

Step:2 – Installing VMware SQL with MySQL

  • Log in to VMware Cloud Director extension for Data Solutions from VMware Cloud Director.
  • Click Instances > New Instance.
  • Enter the necessary details.
    • Enter the instance name.
    • Select the solution for which you want to create an instance.
    • Select the Kubernetes cluster for this instance.
    • You must enter only a default password.
    • Select a sizing template for this instance.
  • To customize more details, click Show Advanced Settings.
  • To connect to a MySQL instance from outside of your Kubernetes cluster, we must configure the Kubernetes service for the instance to be of type “Load Balancer”. Select “Expose Service by Load Balancer”.
  • Click Create.
  • You can also track the progress of the deployment by running:
#kubectl get pods -n vcd-ds-workloads
  • After a few minutes, you should see the MySQL instance status as “Running” in the vCD GUI as well.
  • You can continue to follow progress in the vCD task bar as well as in the Monitor section of the vCD GUI; the deployment automatically creates the required persistent volumes as well as network services such as the load balancer and NAT rules.
  • Kubernetes requests, and Cloud Director then allocates, an external IP address (load balancer IP) for the service, which we can use to connect to the MySQL service from outside the cluster. You can view the load balancer IP by running:
#kubectl get svc -A
  • Take note of the External IP, which in this case is 172.16.2.6; we will use this IP to connect to the MySQL cluster from outside (a quick command-line connectivity check is sketched after this list).
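Before moving on to Workbench, you can optionally verify connectivity from any machine with the MySQL client installed. A minimal sketch using the values from this walkthrough (external IP 172.16.2.6 and the default mysqlappuser account):

# Connect to the exposed load balancer IP; enter the default password when prompted.
mysql -h 172.16.2.6 -P 3306 -u mysqlappuser -p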

Step:3 – Connecting to VMware SQL with MySQL

To connect to a MySQL service using Workbench, you can follow these steps:

  • Download and install MySQL Workbench from the official MySQL website if you haven’t already.
  • Launch MySQL Workbench on your computer.
  • In the Workbench home screen, click on the “+” icon next to “MySQL Connections” to create a new connection.
  • In the “Setup New Connection” dialog, enter a connection name of your choice.
  • Configure the following settings:
1. Connection Method: Standard TCP/IP
2. MySQL Hostname: the external IP address of the MySQL server, in this case 172.16.2.6
3. MySQL Server Port: enter the port number (the default is 3306)
4. Username: enter the MySQL username, “mysqlappuser”
5. Password: enter the MySQL password you set while deploying MySQL
  • Click on the “Test Connection” button to check if the connection is successful. If the test is successful, you should see a success message.
  • Click on the “OK” button to save the connection settings.

After following these steps, you should be connected to your MySQL service using MySQL Workbench. You can then use the Workbench interface to view the newly deployed instance, run queries, and perform various read-only database-related tasks. The reason access is limited is the following:

When a tenant creates a MySQL deployment through the VMware Cloud Director extension for Data Solutions, the MySQL instance gets a default DB user named “mysqlappuser”. This user’s privileges are limited to the default DB instance; “mysqlappuser” does not have enough permissions to create a database, so we need to create a new user that does.

Step:4 – Enable a full-privileged DB user for your MySQL instance provisioned by the VMware Cloud Director Extension for Data Solutions

MySQL by default has a built-in “root” user with administrator privilege in every MySQL deployment, but it doesn’t support external access. We will use this user to create another full-privileged DB user.

  • Find the DB root user’s password using the below commands:
# kubectl get secret -n vcd-ds-workloads
#kubectl get secret mysqldb01-credentials -n vcd-ds-workloads -o jsonpath='{.data}'
  • Take note of “rootPassword” in the output of the above command and copy the full base64-encoded password, which in this case starts with “RE1n**********”.
  • Decode the above “rootPassword” using the below command:
# echo "RE1nem90TmltZ2laQWdsZC1oqM0VGRw==" | base64 --decode
  • The output of the above command is the root password; use this password to log in.
  • Enter the primary (writable) pod of your MySQL deployment with the below command (refer to this link to identify the writable pod):
# kubectl exec -it mysqldb01-2 -n vcd-ds-workloads -c mysql -- bash
  • Now run the below command to connect to the MySQL instance:
1 - Log in to MySQL:
      #mysql -u root -p$ADMIN_PASSWORD -h 127.0.0.1 -P 3306
2 - Enter the decoded password; once you are successfully connected to MySQL,
3 - Create a local MySQL user using:
      #CREATE USER 'user01'@'%' IDENTIFIED BY 'password';
      #GRANT ALL PRIVILEGES ON *.* TO 'user01'@'%';
  • That’s it; now we can connect to this SQL instance and create a new database, as you can see in the below screenshot.

NOTE: Thanks to the writer of this blog, https://realpars.com/mysql/, I am using this database for this blog article.

In Summary, by leveraging VMware Cloud Director Extension for Data Solutions, CSPs can unlock the power of DBaaS using VMware SQL with MySQL, offering customers a comprehensive and efficient platform for their data management needs. With simplified deployment and management, seamless scalability, enhanced performance optimization, high availability, and robust security and compliance features, CSPs can provide businesses with a reliable and scalable DBaaS solution. Embrace the potential of DBaaS with VMware Cloud Director Extension for Data Solutions and VMware SQL with MySQL, and empower your customers with streamlined data management capabilities in the cloud.

Assess Your Sovereign Cloud Stack for Compliance

VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud is a management pack available in the VMware Marketplace. You can download and install this management pack on an instance of vRealize (ARIA) Operations to automatically assess a Sovereign Cloud stack for compliance. 

VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud is intended to be used by the VMware Cloud Service Partners who are part of the Sovereign Cloud Initiative. The following products in the Sovereign Cloud stack are currently supported for compliance assessment:

  • vSphere
  • NSX-T
  • VMware Cloud Director
  • VMware Cloud Director Availability

For every Sovereign Cloud instance, providers need one instance of vRealize (ARIA) Operations with the VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud installed and configured. The compliance score card is available in the Optimize > Compliance screen of vRealize (ARIA) Operations.

Compliance Pack for Sovereign Cloud Controls Rules

The compliance rules are based on a checklist that VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud utilizes to monitor the products in the Sovereign Cloud stack. The checklist is based on the Sovereign Cloud Framework which takes into consideration the following key principles:

Data Sovereignty and Jurisdictional Control

Data should reside locally.
The cloud should be managed and governed locally, and all data processing including API calls should happen within the country/geography.
Data should be accessible only to residents of the same country, and the data should not be accessible under foreign laws or from any outside geography.

Data Access and Integrity

Two data center locations.
File, Block, and Object store options
Backup services, Disaster Recovery
Low-latency connectivity, Micro segmentation

Data Security and Compliance

Industry recognized Security Controls (minimum ISO/IEC 27001 or equivalent)
Additional relevant industry or governmental certifications
Third-party audits & Zero Trust Security & Encryption
Catalog of trusted images using the sovereign repository
Support for air gapped zones/regions
Operating personnel requirements and security clearance

Data Independence and Interoperability

Workload migration with bi-directional workload portability
Modern application architecture using containers
Support for hybrid cloud deployments

Control Rules and Product Control Set for vSphere

The vSphere control set is available separately for versions 7.0 and 6.5/6.7, and details can be found here.

Control Rules and Product Control Set for NSX-T

Control rules and the product control set for NSX-T can be found here. The supported NSX-T version is 3.2.x or later.

Control Rules and Product Control Set for VMware Cloud Director

Control rules and the product control set for VMware Cloud Director can be found here. The supported version of the VMware Cloud Director management pack is 8.10.2.

Control Rules and Product Control Set for VMware Cloud Director Availability

Control rules and the product control set for VMware Cloud Director Availability can be found here. The supported version of the VMware Cloud Director Availability management pack is 1.2.1.

Install the VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud

The VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud consists of a PAK file that contains default views, reports, alerts, and symptoms for the VMware software in the Sovereign Cloud stack.

  • Download the PAK file for VMware vRealize Operations Compliance Pack for Sovereign Cloud from the VMware Marketplace, and save the file to a temporary folder on your local system.
  • Log in to the vRealize (ARIA) Operations user interface with administrator privileges. Installation of this management pack is to be done by VMware Cloud Provider Program Partners.
  • In the left pane of vRealize (ARIA) Operations, click Integrations under Data Sources.
  • In the Repository tab, click the ADD button.
  • The Add Solution dialog box opens. Click BROWSE to locate the temporary folder on your system, and select the PAK file.
  • Read and select the checkboxes if required, then click Upload. The upload might take several minutes.
  • Read and accept the EULA, and click Next. Installation details appear in the window during the process.
  • When the installation is completed, click Finish.

In the vROps instance, go to Optimize > Compliance. In the VMware Cloud tab, you can see the VMware Sovereign Cloud Compliance card in the VMware Sovereign Cloud Benchmarks section. Click Enable.

When you click Enable, a list of policies appears, and you must select the policy that you want to apply; I am selecting the default policy here.

With the vRealize (ARIA) Operations reporting functions, you can generate a report to view the compliance status of your Sovereign Cloud. You can download the report in a PDF or CSV file format for future and offline needs.

Accessing Compliance Reports

In the VMware vRealize (ARIA) Operations Compliance Pack for Sovereign Cloud, two kinds of reports are available:

At a VMware Cloud Provider Program Partner level – This report considers all the infrastructure-level resources and generates a report at the org level, showing non-compliance. The compliance data is for the org and its associated child hierarchy, and includes compliance details for the org, org VDC, virtual machines, logical switches, and logical routers as per the hierarchy.

From the left menu, click Visualize > Reports and run the report – [vCloud Director] – VMware Sovereign Cloud – Non-Compliance Report.

From the Reports panel, click Generated Reports. To select a generated report from the list, click the vertical ellipsis against the [vCloud Director] – VMware Sovereign Cloud – Non-Compliance Report and select options such as Run and Delete.

At a tenant level – The VMware Cloud Provider Program Partner generates a filtered report that the tenant can access. This report is at an org VDC level, and the compliance data is for the child hierarchy only. It includes compliance details for virtual machines, logical switches, and logical routers as per the hierarchy. Tenants can view the generated reports in the VMware Chargeback console by logging in and navigating to Reports > Tenant Reports and clicking the Generated Reports tab.

Custom Benchmarks

If required, CSP partners/customers can create a custom compliance benchmark to ensure that objects comply with compliance alerts available in vRealize (ARIA) Operations, or with custom compliance alert definitions. When a compliance alert is triggered on your vCenter instance, hosts, virtual machines, distributed port groups, or distributed switches, you investigate the compliance violation. You can add up to five custom compliance scorecards.

This first version of the Sovereign Cloud compliance pack for ARIA Operations brings our Cloud Providers a comprehensive compliance checklist that uses the sovereign framework as a benchmark and continuously validates applications and infrastructure to help partners maintain their sovereignty posture. It extends the capabilities of ARIA Operations, which, in addition to capacity, cost, and performance monitoring, will now monitor and report compliance drift to the right stakeholders so they can better manage their compliance posture.

Cloud Director Container Service Extension – Tanzu Contour, Prometheus and Grafana Install Guide

This post explains how to install and access Tanzu Contour, Prometheus, and Grafana on Tanzu clusters deployed by the Cloud Director Container Service Extension. To get started, first ensure the Tanzu CLI is installed on your local machine; if not, you can install it by following the documentation given here.

Next, you need the kubeconfig file of your target TKG cluster, which must be reachable from the local client machine on which you installed the Tanzu CLI. Also make sure you run:

# tanzu init 

Installation Steps

NOTE: On a CSE 4 provisioned TKG cluster, cert-manager, kapp-controller, secretgen-controller, and the tanzu-standard package repository have already been installed, so you can skip Steps 1, 2, and 3.

Step:1- Install kapp-controller

kapp-controller gives us a flexible way to fetch, template, and deploy our applications to Kubernetes. It will also keep our apps continuously up to date when the configuration in our source repository changes. Install kapp-controller in the cluster using:

#kubectl apply -f https://github.com/vmware-tanzu/carvel-kapp-controller/releases/latest/download/release.yml

Step:2- Install secretgen-controller

In order to manage secrets across namespaces, Tanzu utilizes the Carvel secretgen-controller. You can install secretgen-controller in the cluster using:

#kubectl apply -f https://github.com/vmware-tanzu/carvel-secretgen-controller/releases/latest/download/release.yml

Step:3- Install cert-manager

Install cert-manager in the cluster using:

#kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml

Verify Tanzu Packages

Using the Tanzu CLI, you can install packages from the built-in tanzu-standard package repository or from other package repositories that you add to your target cluster, such as the Tanzu Application Platform Repository. Install the tanzu-standard package repository v1.6.0:

#tanzu package repository add tanzu-standard --url projects.registry.vmware.com/tkg/packages/standard/repo:v1.6.0

Verify that the Prometheus package is available in your Tanzu K8s cluster as well as retrieve the version of the available package:

#tanzu package available list prometheus.tanzu.vmware.com -A

Verify that the Contour package is available in your Tanzu K8s cluster as well as retrieve the version of the available package:

#tanzu package available list contour.tanzu.vmware.com -A

Verify that the Grafana package is available in your Tanzu K8s cluster and retrieve the version of the available package:

#tanzu package available list grafana.tanzu.vmware.com -A

Step 4:- Implement Ingress Control with Contour

Contour is a Kubernetes ingress controller that uses the Envoy edge and service proxy. Tanzu Kubernetes Grid includes signed binaries for Contour and Envoy, which you can deploy into Tanzu Kubernetes (workload) clusters to provide ingress control services in those clusters.

You must first create the configuration file that will be used when you install the Contour package, and then install the package. You can generate the config file using:

#tanzu package available get contour.tanzu.vmware.com/1.20.2+vmware.1-tkg.1 --values-schema

#tanzu package available get contour.tanzu.vmware.com/PACKAGE-VERSION --generate-default-values-file

I am using the below data-values.yaml for Contour:

envoy:
  service:
    type: LoadBalancer
  hostPorts:
    enable: false
  hostNetwork: false
certificates:
  useCertManager: true

Install the package as below:

#tanzu package install contour --package-name contour.tanzu.vmware.com --version 1.20.2+vmware.1-tkg.1 --values-file contour-data-values.yaml

Step 5:- Deploy Prometheus

Prometheus is an open-source systems monitoring and alerting toolkit. Tanzu Kubernetes Grid includes signed binaries for Prometheus that you can deploy on workload clusters to monitor cluster health and services. Verify the configuration file using the below commands; this file configures the Prometheus package.

#tanzu package available get prometheus.tanzu.vmware.com/2.36.2+vmware.1-tkg.1 --values-schema
#tanzu package available get prometheus.tanzu.vmware.com/PACKAGE-VERSION --generate-default-values-file

This command lists the configuration parameters of the Prometheus package and their default values. You can use the output to update your prometheus-data-values.yaml file. I have used the below config file, which is hosted on Git; you can download and use it if you want. In my config file, ingress is enabled in the YAML, which means it works with the ingress controller.

https://raw.githubusercontent.com/avnish80/prometheus/main/prometheus-data-values.yaml
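If you want to start from that file, a simple way to fetch it before editing and installing is sketched below; review and adjust the values before applying them to your cluster.

# Download the sample Prometheus values file referenced above.
wget https://raw.githubusercontent.com/avnish80/prometheus/main/prometheus-data-values.yaml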

Install, update, or delete the Prometheus package using the below commands:

#tanzu package install prometheus --package-name prometheus.tanzu.vmware.com --version 2.36.2+vmware.1-tkg.1 --values-file prometheus-data-values.yaml

#tanzu package installed update prometheus --values-file prometheus-data-values.yaml
 
#tanzu package installed delete prometheus

Step 6:- Deploy Grafana

Grafana is open-source software that allows you to visualize and analyze metrics data collected by Prometheus on your clusters. Tanzu Kubernetes Grid includes a Grafana package that you can deploy on your Tanzu Kubernetes clusters. Verify the configuration file; this file configures the Grafana package.

#tanzu package available get grafana.tanzu.vmware.com/7.5.16+vmware.1-tkg.1 --values-schema

#tanzu package available get grafana.tanzu.vmware.com/PACKAGE-VERSION --generate-default-values-file

This command lists the configuration parameters of the Grafana package and their default values. You can use the output to update your grafana-data-values.yaml file. I have used the below config file, which is hosted on GitHub; you can download and use it if you like. In my config file, ingress is enabled in the YAML, which means Grafana is exposed through the Contour ingress.

https://raw.githubusercontent.com/avnish80/grafana/main/grafana-data-values.yaml

#tanzu package install grafana --package-name grafana.tanzu.vmware.com --version 7.5.16+vmware.1-tkg.1 --values-file grafana-data-values.yaml
 
#tanzu package installed update grafana --values-file grafana-data-values.yaml
 
#tanzu package installed delete grafana
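
Once Grafana is installed, you can confirm the package reconciled and that the Contour HTTPProxy object was created with the grafana.system.tanzu FQDN:

#tanzu package installed get grafana

#kubectl get httpproxy -A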

Access the Grafana Dashboard

After Grafana is deployed, the Grafana package creates a Contour HTTPProxy object with a Fully Qualified Domain Name (FQDN) of grafana.system.tanzu. To use this FQDN to access the Grafana dashboard, point it at the IP address of the LoadBalancer for the Envoy service in the tanzu-system-ingress namespace.

In case the FQDN does not resolve and you cannot access the Grafana dashboard:

  1. Create an entry in your local /etc/hosts file that points an IP address to this FQDN (see the example after this list):
  2. Use the IP address of the LoadBalancer for the Envoy service in the tanzu-system-ingress namespace.
  3. Navigate to https://grafana.system.tanzu.
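
For reference, assuming the Envoy LoadBalancer was assigned 10.10.10.100 (a placeholder address; retrieve the real one with kubectl get svc envoy -n tanzu-system-ingress), the /etc/hosts entry would look like this:

10.10.10.100   grafana.system.tanzu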

Another possible issue is that, because the site uses self-signed certificates, you might need to click through a browser-specific security warning before you are able to access the dashboard.

Security VDC Architecture with VMware Cloud Director

Featured

Cloud Director VDCs come with all the features you’d expect from a public cloud. A Virtual Data Center is a logical representation of a physical data center, created using virtualization technologies, and it allows IT administrators to create, provision, and manage virtualized resources such as servers, storage, and networking in a flexible and efficient manner. The recently released VMware Cloud Director 10.4.1 brings quite a lot of new features, and in this article I want to double-click on external networking.

External Networks

An external network is a network that is external to the VCD infrastructure, such as a physical network or a virtual network. External networks are used to connect virtual machines in VCD with the outside world, allowing them to communicate with the Internet or with other networks that are not part of the VCD infrastructure.

New Features in Cloud Director 10.4.1 External Networks

With the release of Cloud Director 10.4.1, external networks that are NSX-T segment backed (VLAN or overlay) can now be connected directly to an Org VDC Edge Gateway and no longer need to be routed through the Tier-0 or VRF gateway that the Org VDC Gateway is connected to. This connection is made via a service interface on the service node of the Tier-1 gateway that is backing the Org VDC Edge Gateway. The Org VDC Edge Gateway still needs a parent Tier-0/VRF, although it can be disconnected from it. Here are some of the use cases of this external network capability that we are going to discuss:

  • Transit connectivity across multiple Org VDC Edge Gateways to route between different Org VDCs
  • Routed connectivity via dedicated VLAN to tenant’s co-location physical servers
  • Connectivity towards Partner Service Networks
  • MPLS connectivity to direct connect while internet is accessible via shared provider gateway

Security VDC Architecture using External Networks (transit connectivity across two Org VDC Edge Gateways)

A Security VDC is a common strategy for connecting multiple VDCs: the Security VDC becomes the single egress and ingress point and can also host additional firewalls and other security appliances. In the below section I am showing how that can be achieved using the new external network feature:

  • This is using an Overlay (Geneve) backed external network
  • This Overlay network must be prepared by the provider in NSX as well as in Cloud Director
  • The NSX Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes
  • The Tier-1 GW has a default route (0.0.0.0/0) always pointing towards its parent Tier-0/VRF GW
  • To set the default route to the segment-backed external network, you need to use two more specific routes. For example:
    • 0.0.0.0/1 next hop <IP> scope <network>
    • 128.0.0.0/1 next hop <IP> scope <network>

Security VDC Architecture using External Networks – Multi-VDC

  1. Log in to the VMware Cloud Director Service Provider Admin Portal.
  2. From the top navigation bar, select Resources and click Cloud Resources.
  3. In the left pane, click External Networks and click New.

On the Backing Type page, select NSX-T Segments and a registered NSX Manager instance to back the network, and click Next.

Enter a name and a description for the new external network

Select an NSX segment from the list to import and click Next. An NSX segment can be backed either by a VLAN transport zone or by an overlay transport zone

  1. Configure at least one subnet and click Next.
    1. To add a subnet, click New.
    2. Enter a gateway CIDR.
    3. To enable, select the State checkbox.
    4. Configure a static IP pool by adding at least one IP range or IP address.
    5. Click Save.
    6. (Optional) To add another subnet, repeat steps A to E.

Review the network settings and click Finish.

Now the provider will need to go to the tenant Org VDC, add the above configured external network to the tenant’s Tier-1 edge gateway, and offer net-new networking configuration and options.

Other Patterns

Routed connectivity via dedicated VLAN to tenant’s co-location physical servers or MPLS

  • This is using a VLAN backed external network
  • This VLAN backed network must be prepared by the provider in NSX as well as in Cloud Director
  • The NSX Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes
  • One VLAN segment can be connected to only one Org VDC Edge GW
  • The Tier-1 GW has a default route (0.0.0.0/0) always pointing towards its parent Tier-0/VRF GW
  • Set a static route to the co-location or MPLS network via the segment-backed external network. For example:
    • 172.16.50.0/24 next hop <10.10.10.1> scope <external network>

Connectivity towards Partner Service Networks

  • This is using a VLAN backed external network
  • This VLAN backed network must be prepared by the provider in NSX as well as in Cloud Director
  • The NSX Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes
  • One VLAN segment can be connected to only one Org VDC Edge GW
  • The Tier-1 GW has a default route (0.0.0.0/0) always pointing towards its parent Tier-0/VRF GW
  • Set a static route to the partner service network via the segment-backed external network. For example:
    • <Service Network> next hop <Service Network Router IP> scope <External Network>

DOWNLOAD PDF from Here

I hope this article helps providers offer net-new network capabilities to their tenants. Please feel free to share feedback.

Getting Started with VMware Cloud Director Container Service Extension 4.0

Featured

VMware Cloud Director Container Service Extension brings Kubernetes as a service to VMware Cloud Director, offering multi-tenant, VMware supported, production ready, and compatible Kubernetes services with Tanzu Kubernetes Grid. As a service provider administrator, you can add the service to your existing VMware Cloud Director tenants. By using VMware Cloud Director Container Service Extension, customers can also use Tanzu products and services such as Tanzu® Mission Control to manage their clusters.

Pre-requisite for Container Service Extension 4.0

  • Provider Specific Organization – Before you can configure the VMware Cloud Director Container Service Extension server, you must create an organization to host the VMware Cloud Director Container Service Extension server
  • Organization VDC within the Organization – the Container Service Extension appliance will be deployed in this organization virtual data center
  • Network connectivity – Network connectivity between the machine where VMware Cloud Director Container Service Extension is installed and the VMware Cloud Director server. VMware Cloud Director Container Service Extension communicates with VMware Cloud Director using the VMware Cloud Director public API endpoint
  • The CSE 4.0 CPI automatically creates load balancers, so you must ensure that you have configured NSX Advanced Load Balancer, NSX Cloud, and an NSX Advanced Load Balancer Service Engine Group for tenants who need to create Tanzu Kubernetes clusters.

Provider Configuration

With the release of VMware Cloud Director Container Service Extension 4.0, service providers can use the CSE Management tab in the Kubernetes Container Clusters UI plug-in, which walks through the step-by-step process to configure the VMware Cloud Director Container Service Extension server.

Install Kubernetes Container Clusters UI plug-in for VMware Cloud Director

You can download the Kubernetes Container Clusters UI plug-in from the VMware Cloud Director download page and upload the plug-in to VMware Cloud Director.

NOTE: If you have previously used the Kubernetes Container Clusters plug-in with VMware Cloud Director, it is necessary to deactivate it before you can activate a newer version, as only one version of the plug-in can operate at one time in VMware Cloud Director. Once you activate a new plug-in, it is necessary to refresh your Internet browser to begin using it.

Once the partner has installed the plug-in, the Getting Started section within the CSE Management page helps providers learn about and set up VMware Cloud Director Container Service Extension in VMware Cloud Director through the Kubernetes Container Clusters UI plug-in 4.0. At a very high level this is a six-step process:

Let’s start following these steps and deploy.

Step:1 – This section links to the locations where providers can download the following two types of OVA files that are necessary for VMware Cloud Director Container Service Extension configuration:

NOTE- Do not download FIPS enabled templates

Step:2 – Create a catalog in VMware Cloud Director and upload the VMware Cloud Director Container Service Extension OVA files that you downloaded in Step 1 into this catalog.

Step:3 – This section initiates the VMware Cloud Director Container Service Extension server configuration process. In this process, you can enter details such as software versions, proxy information, and syslog location. This workflow automatically creates the Kubernetes Clusters rights bundle, the CSE Admin Role, the Kubernetes Cluster Author role, and VM sizing policies. The Kubernetes Clusters rights bundle and the Kubernetes Cluster Author role are automatically published to all tenants, and the following Kubernetes resource versions will be deployed:

Kubernetes Resources and Supported Versions:

  • Cloud Provider Interface (CPI) – 1.2.0
  • Container Storage Interface (CSI) – 1.3.0
  • CAPVCD – 1.0.0

Step:4 – This section links to the Organization VDCs section in VMware Cloud Director, where you can assign VM sizing policies to customer organization VDCs. To avoid resource limit errors in clusters, it is necessary to add Tanzu Kubernetes Grid VM sizing policies to organization virtual data centers. The Tanzu Kubernetes Grid VM sizing policies are automatically created in the previous step. The policies created are as below:

Sizing Policy – Description – Values:

  • TKG small – Small VM sizing policy – 2 CPU, 4 GB memory
  • TKG medium – Medium VM sizing policy – 2 CPU, 8 GB memory
  • TKG large – Large VM sizing policy – 4 CPU, 16 GB memory
  • TKG extra-large – X-large VM sizing policy – 8 CPU, 32 GB memory

NOTE: Providers can create more policies manually based on requirements and publish them to tenants.

In the VMware Cloud Director UI, select an organization VDC and, from the left panel under Policies, select VM Sizing and click Add. Then, from the data grid, select the Tanzu Kubernetes Grid sizing policy you want to add to the organization and click OK.

Step:5 – This section links to the Users section in VMware Cloud Director, where you can create a user with the CSE Admin Role role. This role grants administration privileges to the user for VMware Cloud Director Container Service Extension administrative purposes. You can use this user account as OVA deployment parameters when you start the VMware Cloud Director Container Service Extension server.

Step:6 – This section links to the vApps section in VMware Cloud Director where you can create a vApp from the uploaded VMware Cloud Director Container Service Extension server OVA file to start the VMware Cloud Director Container Service Extension server.

  • Create a vApp from VMware Cloud Director Container Service Extension server OVA file.
  • Configure the VMware Cloud Director Container Service Extension server vApp deployment lease
  • Power on the VMware Cloud Director Container Service Extension server.

Container Service Extension OVA deployment

Enter a vApp name, optionally a description, and the runtime and storage leases (set these to never expire so that the appliance does not suspend automatically), and click Next.

Select a virtual data center and review the default configuration for resources, compute policies, hardware, networking, and edit details where necessary.

  • In the Custom Properties window, configure the following settings:
    • VCD host: VMware Cloud Director URL
    • CSE service account’s username: the CSE Admin user created in the organization
    • CSE service account’s API Token: to generate an API token, log in to a provider session with the CSE user which you created in Step 5, then go to “User Preferences” and click “NEW” in the “Access Tokens” section. (When you generate an API access token, you must copy the token, because it appears only once. After you click OK, you cannot retrieve this token again, you can only revoke it.)

  • CSE service account’s org: The organization that the user with the CSE Admin role belongs to, and that the VMware Cloud Director Container Service Extension server deploys to.
  • CSE service vApp’s org: Name of the provider org where CSE app will be deployed

In the Virtual Applications tab, in the bottom left of the vApp, click Actions > Power > Start. This completes the vApp creation from the VMware Cloud Director Container Service Extension server OVA file. This task is the final step for service providers to perform before the VMware Cloud Director Container Service Extension server can operate and start provisioning Tanzu Kubernetes clusters.

CSE 4.0 is packed with capabilities that address and attract developer personas with an improved feature set and simplified cluster lifecycle management. Users can now build, upgrade, resize, and delete K8s clusters directly from the UI, making it simpler and faster to accomplish tasks than before. This completes the provider section of Container Service Extension; in the next blog post I will write about the tenant workflow.

Multi-Tenant Tanzu Data Services with VMware Cloud Director

Featured

VMware Cloud Director extension for VMware Data Solutions is a plug-in for VMware Cloud Director (VCD) that enables cloud providers to expand their multi-tenant cloud infrastructure platform to deliver a portfolio of on-demand caching, messaging and database software services at massive scale. This brings a new opportunity for our Cloud Providers to offer additional cloud native developer services in addition to the VCD powered Infrastructure-as-a-Service (IaaS).

VMware Cloud Director extension for Data Solutions offers a simple tenant-facing self-service UI for the lifecycle management of the below Tanzu data services, with a single view across multiple instances and with a URL to individual instances for service-specific management.

Tenant Self-Service Access to Data Solutions

Tenant users can access VMware Cloud Director extension for Data Solutions from VMware Cloud Director tenant portal

Before a tenant user can deploy any of the above solutions, they must prepare their Tanzu Kubernetes clusters deployed by CSE. When you click Install Operator for a Kubernetes cluster in VMware Cloud Director extension for Data Solutions, the Data Solutions Operator is automatically installed to this cluster; this operator handles the lifecycle management of the data services. To install the operator, simply log in to VMware Cloud Director extension for Data Solutions from VMware Cloud Director and then:

  1. Click Settings > Kubernetes Clusters
  2. Select the Kubernetes cluster on which you want to deploy Data Services
  3. Click Install Operator.

It takes a few minutes for the status of the cluster to change to Active.

Deploy a Tanzu Data Services instance

Go to Solutions and choose your required solution and click on “Launch”

This will take you to the “Instances” page; there, enter the necessary details:

  • Enter the instance name.
  • Solution should have RabbitMQ selected
  • Select the Kubernetes cluster (you can only select a cluster which has the Data Solutions Operator successfully installed)
  • Select a solution template (T-Shirt sizes)

To customize the instance, for example to configure the RabbitMQ Management Console or to Expose Load Balancer for AMQP, click Show Advanced Settings and select the appropriate option.

Monitor Instance Health using Grafana

Tanzu Kubernetes Grid provides cluster monitoring services by implementing the open source Prometheus and Grafana projects. Tenants can use the Grafana portal to get insights about the state of the RabbitMQ nodes and runtime. For this to work, Grafana must be installed on the CSE 4 Tanzu cluster.

NOTE: Follow this link for Prometheus and Grafana installation on CSE Tanzu K8s clusters.

Connecting to RabbitMQ

Since during the deployment I exposed RabbitMQ with “Expose Load Balancer for AMQP”, if you take a look at the VCD load balancer configuration you will see that CSE automatically exposed RabbitMQ as a load balancer VIP and a NAT rule was created, so that you can access it from outside.
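
A quick way to confirm the AMQP endpoint from the cluster side is to list the LoadBalancer services (the exact service name depends on the instance name you chose):

#kubectl get svc -A | grep -i rabbitmq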

Provider Configuration

Before you start using VMware Cloud Director extension for Data Solutions, you must meet certain prerequisites:

  1. VMware Cloud Director version 10.3.1 or later.
  2. Container Service Extension version 4.0 or later added to your VMware Cloud Director.
  3. A client machine with macOS or Linux, which has network connectivity to the VMware Cloud Director REST endpoint.
  4. Verify that you have obtained a VMware Data Solutions account.

Detailed instructions for installing VMware Cloud Director extension for VMware Data Solutions are documented Here.

VMware Cloud Director extension for VMware Data Solutions comes at zero additional cost to our cloud providers. Please note that while the extension itself does not carry a cost, cloud providers need to report their service consumption of the Data Services, which do carry a cost.

VMware Cloud Director Charge Back Explained

Featured

VMware Chargeback not only enables metering and chargeback capabilities, but also provides visibility into infrastructure usage through performance and capacity dashboards for Cloud Providers as well as tenants.

To help Cloud Providers and tenants realise more value for every dollar they spend on infrastructure (ROI), and in turn provide similar value to their tenants, our focus is not only to expand the coverage of services that can be priced in VMware Chargeback, but also to provide visibility into the cost of infrastructure to providers and a billing summary to organizations, clearly highlighting the cost incurred by various business units. But before we dive in further to see what’s new with this release, please note:

  • vRealize Operations Tenant App is now rebranded to VMware Chargeback.
  • VMware Chargeback is also now available as a SaaS offering. The Software-as-a-Service (SaaS) offering will be available as early access, with limited availability, with the purchase or trial of the VMware Cloud Director™ service. See the Announcing VMware Chargeback for Managed Service Providers blog.

Creation of pricing policy based on chargeback strategy

The provider administrator can create one or more pricing policies based on how they want to charge back their tenants. Based on the vCloud Director allocation models, each pricing policy is of the type Allocation Pool, Reservation Pool, or Pay-As-You-Go.

NOTE – The pricing policies apply to VMs at a minimum granularity of five minutes. The VMs that are created and deleted within the short span of five minutes will still be charged.

CPU Rate

Providers can charge the CPU rate based on GHz or vCPU counts.

  • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
  • Charge Based on Power State indicates the pricing model based on which the charges are applied; values are: Always, Only when powered on, Powered on at least once
  • Default Base Rate – any base rate that the provider wants to charge
  • Add Slab – providers can optionally charge different rates depending on the number of vCPUs used (see the worked example after this list)
  • Fixed Cost – fixed costs do not depend on the units of charging
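
As a purely illustrative example (the numbers are hypothetical, not product defaults): suppose CPU is charged per vCPU with a Default Base Rate of $5 per vCPU per month and a slab that charges $4 per vCPU for VMs with more than 4 vCPUs. A 2-vCPU VM would then be charged 2 x $5 = $10 per month, an 8-vCPU VM would be charged 8 x $4 = $32 per month, and any Fixed Cost defined in the policy would be added on top regardless of vCPU count.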

Memory Rate

  • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
  • Charge Based on indicates the pricing model based on which the charge is applied; values are: Usage, Allocation, and Maximum from usage and allocation
  • Charge Based on Power State indicates the pricing model based on which the charges are applied; values are: Always, Only when powered on, Powered on at least once
  • Default Base Rate – any base rate that the provider wants to charge
  • Add Slab – providers can optionally charge different rates depending on the memory allocated
  • Fixed Cost – fixed costs do not depend on the units of charging

Storage Rate

You can charge for storage either based on storage policies or independent of it.

  • This way of setting rates will be deprecated in a future release, and it is advisable to use the Storage Policy option instead.
  • Select the Storage Policy Name from the drop-down menu.
  • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
  • Charge Based on indicates the pricing model based on which the charge is applied. You can charge for used storage or configured storage of the VMs
  • Charge Based on Power State decides if the charge should be applied based on the power state of the VM; values are: Always, Only when powered on, Powered on at least once
  • Add Slab – you can optionally charge different rates depending on the storage allocated

Network Rate

Enter the External Network Transmit and External Network Receive rates.

Note: If your network is backed by NSX-T, you will be charged only for the network data transmit and network data receive.

  • Network Transmit Rate – select the Charge Period and enter the Default Base Rate; using slabs, you can optionally charge different rates depending on the network data consumed
  • Network Receive Rate – select the Charge Period and enter the Default Base Rate; using slabs, you can optionally charge different rates depending on the network data consumed. Enter valid numbers for the Base Rate Slab and click Add Slab.

Advanced Network Rate

Under Edge Gateway Size, enter the base rates for the corresponding edge gateway sizes

  • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
  • Enter the Base Rate

Guest OS Rate

Use the Guest OS Rate to charge differently for different operating systems

  • Enter the Guest OS Name
  • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
  • Charge Based on Power State decides if the charge should be applied based on the power state of the VM; values are: Always, Only when powered on, Powered on at least once
  • Enter the Base Rate

Cloud Director Availability

The Cloud Director Availability section is used to set pricing for replications created from Cloud Director Availability.

  • Replication SLA Profile – enter a replication policy name
  • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
  • Enter the Base Rate

You can also charge for the storage consumed by replication objects in the Storage Usage Charge section. This is used to set additional pricing for storage used by Cloud Director Availability replications in Cloud Director. Please note that the storage usage defined in this tab will be added on top of the Storage Policy Base Rate.

vCenter Tag Rate

This section is used for any additional charges to be applied to the VMs based on their discovered tags from vCenter (typical examples are Antivirus=true, SpecialSupport=true, etc.).

  • Enter the Tag Category and Tag Value
  • Charge based on Fixed Rate, or
  • Charge based on Alternate Pricing Policy – select the appropriate Pricing Policy
  • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
  • Charge Based on Power State decides if the charge should be applied based on the power state of the VM; values are: Always, Only when powered on, Powered on at least once
  • Enter the Base Rate

VCD Metadata Rate

Use the VCD Metadata Rate to charge differently for different metadata set on vApps

NOTE- Metadata based prices are available in bills only if Enable Metadata option is enabled in vRealize Operations Management Pack for VMware Cloud Director.

  • Enter the Tag Category and Tag Value
  • Charge based on Fixed Rate, or
  • Charge based on Alternate Pricing Policy – select the appropriate Pricing Policy
  • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
  • Charge Based on Power State decides if the charge should be applied based on the power state of the VM; values are: Always, Only when powered on, Powered on at least once
  • Enter the Base Rate

One Time Fixed Cost

One Time Fixed Cost is used to charge for one-time incidental charges on virtual machines, such as creation/setup charges, or charges for one-off incidents like installation of a patch. These costs do not repeat on a recurring basis.

For the values, follow the VCD Metadata and vCenter Tag sections.

Rate Factors

Rate factors are used to either bump up or discount the prices, either against individual resources consumed by the virtual machines or against the whole charge for the virtual machine. Some examples are:

  • Increase CPU rate by 20% (Factor 1.2) for all VMs tagged with CPUOptimized=true
  • Discount the overall charge on a VM by 50% (Factor 0.5) for all VMs tagged with PromotionalVM=True
  • VCD Metadata
    • enter the Tag Key and Tag Value
      • Change the price of Total, vCPU, Memory and Storage
      • By applying a factor of – increase or decrease the price by entering a valid number
  • vCenter Tag
    • enter the Tag Key and Tag Value
      • Change the price of Total, vCPU, Memory and Storage
      • By applying a factor of – increase or decrease the price by entering a valid number

Tanzu Kubernetes Clusters

This section will be used to charge for Tanzu K8s clusters and objects.

  • Cluster Fixed Cost
    • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
    • Fixed Cost – fixed costs do not depend on the units of charging
  • Cluster CPU Rate
    • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
    • Charge Based on decides if the charge should be applied based on Usage or Allocation
    • Default Base Rate (per GHz)
  • Cluster Memory Rate
    • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
    • Charge Based on decides if the charge should be applied based on Usage or Allocation
    • Default Base Rate (per GB)

Additional Fixed Cost

You can use the Additional Fixed Cost section to charge at the Org-VDC level. You can use this for charges such as overall tax, overall discounts, and so on. The charges can be applied to selective Org-VDCs based on Org-VDC metadata.

  • Fixed Cost
    • Charge Period indicates the frequency of charging; values are: Hourly, Daily, Monthly
    • Fixed Cost
  • VCD Metadata – enter the Tag Key and Tag Value
  • VCD Metadata One Time – enter the Tag Key and Tag Value

Apply Policy

Cloud Director Charge Back provides flexibility to service providers to map the created pricing policies to specific organization VDCs. By doing this, the service provider can holistically define how each of their customers is charged based on resource types.

Bills

Every tenant/customer of a service provider can see and review their bills using the Cloud Director Charge Back app. The service provider administrator can generate bills for a tenant by selecting a specific resource and a pricing policy that must be applied for a defined period, and can also log in to review the bill details.

This completes the feature walkthrough of Cloud Director Charge Back. Go ahead, deploy it, and add native chargeback power to your cloud.

NFS DataStore on VMware Cloud on AWS using Amazon FSx for NetApp

Featured


Amazon FSx for NetApp ONTAP integration with VMware Cloud on AWS is an AWS-managed external NFS datastore built on NetApp’s ONTAP file system that can be attached to a cluster in your SDDC. It provides customers with flexible, high-performance virtualized storage infrastructure that scales independently of compute resources.

PROCESS

  • Make sure the SDDC has been deployed on VMware Cloud on AWS with version 1.20
  • The SDDC is added to an SDDC Group. While creating the SDDC Group, a VMware Managed Transit Gateway (vTGW) is automatically deployed and configured
  • A Multi-AZ file system powered by Amazon FSx for NetApp ONTAP is deployed across two AWS Availability Zones (AZs). (You can also deploy in a single AZ, but this is not recommended for production)

DEPLOY VMWARE MANAGED TRANSIT GATEWAY

To use FSx for ONTAP as an external datastore, an SDDC must be a member of an SDDC group so that it can use the group’s vTGW. To configure this, you must be logged in to the VMC console as a user with the VMC service role of Administrator, and then follow the below steps:

  • Log in to the VMC Console, go to the Inventory page, and click SDDC Groups
  • On the SDDC Groups tab, click ACTIONS and select Create SDDC Group
  • Give the group a Name and optional Description, then click NEXT
  • On the Membership grid, select the SDDCs to include as group members. The grid displays a list of all SDDCs in your organization. To qualify for membership in the group, an SDDC must meet several criteria:
    • It must be at SDDC version 1.11 or later. Members of a multi-region group must be at SDDC version 1.15 or later.
    • Its management network CIDR block cannot overlap the management CIDR block of any other group member.
    • It cannot be a member of another SDDC Group.
    When you have finished selecting members, click NEXT. You can edit the group later to add or remove members.
  • Acknowledge that you understand and take responsibility for the costs you incur when you create an SDDC group, then click CREATE GROUP to create the SDDC Group and its VMware Transit Connect network.

ATTACH VPC TO VMWARE MANAGED TRANSIT GATEWAY

After the SDDC Group is created, it shows up in your list of SDDC Groups. Select the SDDC Group, go to the External VPC tab, click the ADD ACCOUNT button, provide the AWS account that will be used to provision the FSx file system, and then click Add.

Now it’s time to go back to the AWS console and sign in to the same AWS account where you will create the Amazon FSx file system. Navigate to the Resource Access Manager service page and click on the Accept resource share button.

Next, we need to attach the VMC Transit Gateway to the FSx VPC. For that, you need to go to:

ATTACH VMWARE MANAGED TRANSIT GATEWAY TO VPC

  • Open the Amazon VPC console and navigate to Transit Gateway Attachments.
  • Choose Create transit gateway attachment
  • For Name tag, optionally enter a name for the transit gateway attachment.
  • For Transit gateway ID, choose the transit gateway for the attachment, make sure you choose a transit gateway that was shared with you.
  • For Attachment type, choose VPC.
  • For VPC ID, choose the VPC to attach to the transit gateway. This VPC must have at least one subnet associated with it.
  • For Subnet IDs, select one subnet for each Availability Zone to be used by the transit gateway to route traffic. You must select at least one subnet. You can select only one subnet per Availability Zone.
  • Choose Create transit gateway attachment.

Accept the Transit Gateway attachment as follows:

  • Navigate back to the SDDC Group, External VPC tab, select the AWS account ID used for creating your FSx for NetApp ONTAP, and click Accept. This process takes some time.
  • Next, you need to add the routes so that the SDDC can see the FSx file system. This is done on the same External VPC tab, where you will find a table with the VPC. In that table, there is a button called Add Routes. In the Add Route section, add the CIDR of the VPC where the FSx will be deployed.

In the AWS console, create the route back to the SDDC by locating VPC on the VPC service page and navigating to the Route Table as seen below.

Also ensure that you have the correct inbound rules to allow traffic from the SDDC Group CIDR; in this case I am using the entire SDDC CIDR. In addition to this security group, the ENI security group also needs the NFS port ranges added as inbound and outbound rules to allow communication between VMware Cloud on AWS and the FSx service.

Deploy FSx for NetApp ONTAP file system in your AWS account

The next step is to create an FSx for NetApp ONTAP file system in your AWS account. To connect FSx to the VMware Cloud on AWS SDDC, we have two options:

  • Either create a new Amazon VPC under the same connected AWS account and connect it using VMware Transit Connect,
  • or create a new AWS account in the same region, along with a VPC, and connect it using VMware Transit Connect.

In this blog, I am deploying in the same connected VPC. To deploy, go to the Amazon FSx service page, click Create File System, and on the Select file system type page, select Amazon FSx for NetApp ONTAP.

On the next page, select the Standard create method and enter the required details:

  • Select Deployment type (Multi-AZ) and Storage capacity
  • Select correct VPC, Security group and Subnet

After the file system is created, check the NFS IP address under the Storage virtual machines tab. The NFS IP address is the floating IP that is used to manage access between file system nodes, and we will use this IP when configuring VMware Transit Connect to allow access to the volume from the SDDC.

We are done with creating the FSx for NetApp ONTAP file system.

MOUNT NFS EXTERNAL STORAGE TO SDDC Cluster

Now it’s time for you to go back to the VMware Cloud on AWS console and open the Storage tab of your SDDC. Click ATTACH DATASTORE and fill in the required values.

  • Select a cluster. Cluster-1 is preselected if there are no other clusters.
  • Choose Attach a new datastore
  • Enter the NFS IP address shown in the Endpoints section of the FSx Storage Virtual Machine tab. Click VALIDATE to validate the address and retrieve the list of mount points (NFS exports) from the server.

  • Pick one from the list of mount points exported by the server at the NFS server address. Each mount point must be added as a separate datastore
  • AWS FSx ONTAP
  • Give the datastore a name. Datastore names must be unique within an SDDC.
  • Click ATTACH DATASTORE

VMware Cloud on AWS supports external storage starting with SDDC version 1.20. To request an upgrade to an existing SDDC, please contact VMware support or notify your Customer Success Manager.

Cross-Cloud Disaster Recovery with VMware Cloud on AWS and Azure VMware Solution

Featured

Disaster recovery is an important aspect of any cloud deployment. It is always possible that an entire cloud data center or region of a cloud provider goes down. This has already happened to most cloud providers like Amazon AWS, Microsoft Azure, and Google Cloud, and it will surely happen again in the future. Cloud providers like Amazon AWS, Microsoft Azure, and Google Cloud will readily suggest that you have a Disaster Recovery and Business Continuity strategy that spans multiple regions, so that if a single geographic region goes down, business can continue to operate from another region. This only sounds good in theory; there are several issues with the methodology of using another region of a single cloud provider. Here are some of the key reasons why I think a single cloud provider’s cross-region DR will not be that effective:

  • A single Cloud Region failure might cause huge capacity issues for other regions used as DR
  • Cloud regions are not fully independent. For example, AWS RDS allows read replicas in other regions, but one wrong entry will get replicated across read replicas, which breaks the notion of “cloud regions are independent”.
  • Data is better protected from accidental deletions when stored across clouds. For example, what if malicious code, an employee, or a cloud provider’s employee runs a script which deletes all the data? In most cases this will not impact the other cloud.

In this blog post we will see how the VMware cross-cloud disaster recovery solution can help customers and partners overcome BC/DR challenges.

Deployment Architecture

Here is my deployment architecture and connectivity:

  • One VMware Cloud on AWS SDDC
  • One Azure VMware Solution SDDC
  • Both SDDCs are connected over MegaPort MCR

Activate VMware Site Recovery on VMware Cloud on AWS

To configure Site Recovery on the VMware Cloud on AWS SDDC, go to the SDDC page, click the Add Ons tab, and under the Site Recovery add-on, click the ACTIVATE button.

In the pop-up window, click ACTIVATE again.

This will deploy SRM on the SDDC; wait for it to finish.

Deploy VMware Site Recovery Manager on Azure VMware Solution

In your Azure VMware Solution private cloud, under Manage, select Add-ons > Disaster recovery and click on “Get Started”

From the Disaster Recovery Solution drop-down, select VMware Site Recovery Manager (SRM), provide the license key, agree to the terms and conditions, and then select Install.

After the SRM appliance installs successfully, you’ll need to install the vSphere Replication appliances. Each replication server accommodates up to 200 protected VMs. Scale in or scale out as per your needs.

Move the vSphere server slider to indicate the number of replication servers you want based on the number of VMs to be protected. Then select Install

Once installed, verify that both SRM and the vSphere Replication appliances are installed. After installing VMware SRM and vSphere Replication, you need to complete the configuration and site pairing in vCenter Server.

  1. Sign in to vCenter Server as cloudadmin@vsphere.local.
  2. Navigate to Site Recovery, check the status of both vSphere Replication and VMware SRM, and then select OPEN Site Recovery to launch the client.

Configure site pairing in vCenter Server

Before starting the site pairing, make sure the firewall rules between VMware Cloud on AWS and Azure VMware Solution have been opened as described Here and Here.

To start pairing select NEW SITE PAIR in the Site Recovery (SR) client in the new tab that opens.

Enter the remote site details, then select FIND VCENTER SERVER INSTANCES, select the remote vCenter, and click NEXT. At this point, the client should discover the VRM and SRM appliances on both sides as services to pair.

Select the appliances to pair and then select NEXT.

Review the settings and then select FINISH. If successful, the client displays another panel for the pairing. However, if unsuccessful, an alarm will be reported.

After you’ve created the site pairing, you can view the site pairs and other related details, and you are ready to plan for disaster recovery.

Planning

Mappings allow you to specify how Site Recovery Manager maps virtual machine resources on the protected site to resources on the recovery site. You can configure site-wide mappings to map objects in the vCenter Server inventory on the protected site to corresponding objects in the vCenter Server inventory on the recovery site:

  • Network Mapping
  • IP Customization
  • Folder Mapping
  • Resource Mapping
  • Storage Policy Mapping
  • Placeholder Datastores

Creating Protection Groups

A protection group is a collection of virtual machines that Site Recovery Manager protects together. Protection groups are configured per SDDC and need to be created on each SDDC if VMs are replicated bi-directionally.

Recovery Plan

A recovery plan is like an automated run book. It controls every step of the recovery process, including the order in which Site Recovery Manager powers on and powers off virtual machines, the network addresses that recovered virtual machines use, and so on. Recovery plans are flexible and customizable.

A recovery plan runs a series of steps that must be performed in a specific order for a given workflow such as a planned migration or re-protection. You cannot change the order or purpose of the steps, but you can insert your own steps that display messages and run commands.

A recovery plan includes one or more protection groups. Conversely, you can include a protection group in more than one recovery plan. For example, you can create one recovery plan to handle a planned migration of services from the protected site to the recovery site for the whole SDDC and another set of plans per individual departments. Thus, having multiple recovery plans referencing one protection group allows you to decide how to perform recovery.

Steps to add a VM for replication:

There are multiple ways; I am explaining one here:

  • Choose the VM, right-click on it, select All Site Recovery actions, and click Configure Replication
  • Choose the target site and the replication server to handle the replication
  • VM validation happens, and then choose the target datastore
  • Under Replication settings, choose the RPO, point-in-time instances, etc.
  • Choose the protection group to which you want to add this VM, check the summary, and click Finish

Cross-cloud disaster recovery is one of the most secure and reliable approaches to service availability. The reason cross-cloud disaster recovery is often the best route for businesses is that it provides IT resilience and business continuity. This continuity is most important when considering how companies operate, how customers and clients rely on them for continuous service, and when looking at your company’s critical data, which you do not want to be exposed or compromised.

Frankly speaking, IT disasters happen, and they happen everywhere, including public clouds, and much more frequently than you might think. When they occur, they present stressful situations which require fast action. Even with a strategic method for addressing these occurrences in place, things can seem to spin out of control. When posed with these situations, IT leaders must keep their composure, remain calm, and be able to fully rely on the system they have in place or the partner they are working with for disaster recovery measures.

Customer/Partner with VMware Cloud on AWS and Azure VMware Solution can build cross cloud disaster recovery solution to simplify disaster recovery with the only VMware-integrated solution that runs on any cloud. VMware Site Recovery Manager (SRM) provides policy-based management, minimizes downtime in case of disasters via automated orchestration, and enables non-disruptive testing of your disaster recovery plans.

AI/ML with VMware Cloud Director

Featured

AI/ML—short for artificial intelligence (AI) and machine learning (ML)—represents an important evolution in computer science and data processing that is quickly transforming a vast array of industries.

Why is AI/ML important?

It’s no secret that data is an increasingly important business asset, with the amount of data generated and stored globally growing at an exponential rate. Of course, collecting data is pointless if you don’t do anything with it, but these enormous floods of data are simply unmanageable without automated systems to help.

Artificial intelligence, machine learning and deep learning give organizations a way to extract value out of the troves of data they collect, delivering business insights, automating tasks and advancing system capabilities. AI/ML has the potential to transform all aspects of a business by helping them achieve measurable outcomes including:

  • Increasing customer satisfaction
  • Offering differentiated digital services
  • Optimizing existing business services
  • Automating business operations
  • Increasing revenue
  • Reducing costs

As modern applications become more prolific, Cloud Providers need to address the increasing customer demand for accelerated computing that typically requires large volumes of multiple, simultaneous computation that can be met with GPU capability.

Cloud Providers can now leverage vSphere support for NVIDIA GPUs and NVIDIA AI Enterprise (a cloud-native software suite for the development and deployment of AI that has been optimized and certified for VMware vSphere). This enables vSphere capabilities like vMotion from within Cloud Director and delivers multi-tenant GPU services, which are key to maximizing GPU resource utilization. With Cloud Director support for the NVIDIA AI Enterprise software suite, customers now have access to best-in-class, GPU-optimized AI frameworks and tools to deliver compute-intensive workloads, including artificial intelligence (AI) and machine learning (ML) applications, within their data centers.

This solution with NVIDIA takes advantage of NVIDIA MIG (Multi-Instance GPU), which supports spatial segmentation between workloads at the physical level inside a single device; this is a big deal for multi-tenant environments, driving better hardware utilization and increased margins. Cloud Director relies on host pre-configuration for the GPU services included in NVIDIA AI Enterprise, which contains vGPU technology to enable deployment and configuration of hosts and GPU profiles.

Customers can self serve, manage and monitor their GPU accelerated hosts and virtual machines within Cloud Director. Cloud Providers are able to monitor (through vCloud API and UI dashboard) NVIDIA vGPU allocation, usage per VDC and per VM to optimize utilization and meter/bill (through vCloud API) NVIDIA vGPU usage averaged over a unit of time per tenant for tenant billing.

Provider Workflow

  • Add GPU devices to ESXi hosts in vCenter and install the required drivers.
  • Verify vGPU profiles are visible by going to the VCD provider portal → Resources → Infrastructure Resources → vGPU Profiles
  • Edit vGPU profiles to provide the necessary tenant-facing instructions and a tenant-facing name for each vGPU profile. (Optional)
  • Create a PVDC backed by one or more clusters having GPU hosts in vCenter.
  • In the provider portal → Cloud Resources → vGPU Policies → create a new vGPU policy by following the wizard’s steps.

Tenant Workflow

When you create a vGPU policy, it is not visible to tenants. You can publish a vGPU policy to an organization VDC to make it available to tenants.

Publishing a vGPU policy to an organization VDC makes the policy visible to tenants. The tenant can select the policy when they create a new standalone VM or a VM from a template, edit a VM, add a VM to a vApp, and create a vApp from a vApp template. You cannot delete a vGPU policy that is available to tenants.

  • Publish the vGPU policy to one or more tenant VDCs similar to the way we publish sizing and placement policies.
  • Create a new VM or instantiate a VM from template. In vGPU enabled VDCs, tenants can now select a vGPU policy

Cloud Director not only allows GPU-enabled VMs; providers can also leverage Cloud Director’s Container Service Extension to offer GPU-enabled Tanzu Kubernetes clusters.

Step-by-Step Configuration

The below video covers the step-by-step process of configuring the provider and tenant sides, as well as deploying TensorFlow GPU into a VM.

Persistent Volumes for Tanzu on VMware Cloud on AWS using Amazon FSx for NetApp ONTAP

Featured

Amazon FSx for NetApp ONTAP provides fully managed shared storage in the AWS Cloud with the popular data access and management capabilities of ONTAP. In this blog post we are going to mount these volumes as persistent volumes on Tanzu Kubernetes clusters running on VMware Cloud on AWS.

With Amazon FSx for NetApp ONTAP, you pay only for the resources you use. There are no minimum fees or set-up charges. There are five Amazon FSx for NetApp ONTAP components to consider when storing and managing your data: SSD storage, SSD IOPS, capacity pool usage, throughput capacity, and backups.

The Amazon FSx console has two options for creating a file system – Quick create option and Standard create option. To rapidly and easily create an Amazon FSx for NetApp ONTAP file system with the service recommended configuration, I use the Quick create option.

The Quick create option creates a file system with a single storage virtual machine (SVM) and one volume. The Quick create option configures this file system to allow data access from Linux instances over the Network File System (NFS) protocol.

In the Quick configuration section, for File system name – optional, enter a name for your file system.

For Deployment type choose Multi-AZ or Single-AZ.

  • Multi-AZ file systems replicate your data and support failover across multiple Availability Zones in the same AWS Region.
  • Single-AZ file systems replicate your data and offer automatic failover within a single Availability Zone; for this post I am creating a Single-AZ file system.
  • For SSD storage capacity, specify the storage capacity of your file system, in gibibytes (GiBs). Enter any whole number in the range of 1,024–196,608.
  • For Virtual Private Cloud (VPC), choose the Amazon VPC that is associated with your VMware Cloud on AWS SDDC.

Review the file system configuration shown on the Create ONTAP file system page. For your reference, note which file system settings you can modify after the file system is created.

Choose Create file system.

Quick create creates a file system with one SVM (named fsx) and one volume (named vol1). The volume has a junction path of /vol1 and a capacity pool tiering policy of Auto.

For us to use this SVM, we need to get its NFS IP address. Click on the SVM ID and take note of this IP; we will use it in our NFS configuration for Tanzu.

Kubernetes NFS-Client Provisioner

NFS subdir external provisioner is an automatic provisioner that uses your existing and already configured NFS server to support dynamic provisioning of Kubernetes persistent volumes via persistent volume claims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.

More details are explained here: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

I am deploying this on my Tanzu Kubernetes cluster which is deployed on VMware Cloud on AWS.

  • Add the helm repo –
#helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
  • Install the chart as below:
#helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=<IP address of Service> \
    --set nfs.path=/<Volume Name>
# My command looks like this:
#helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=172.31.1.234 \
    --set nfs.path=/vol1

After installing the chart, you can check the status of the provisioner pod; if it is not in a Running state, describe it to see where it is stuck. For example:
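
The pod name carries a generated suffix, so copy it from the get output before describing it:

#kubectl get pods | grep nfs-subdir-external-provisioner

#kubectl describe pod <nfs-subdir-external-provisioner-pod-name>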

Finally, Test Your Environment!

Now we’ll test the NFS subdir external provisioner by creating a persistent volume claim and a pod that writes a test file to the volume. This will make sure that the provisioner is provisioning and that the Amazon FSx for NetApp ONTAP service is reachable and writable.
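
Here is a minimal sketch of such a test, closely following the upstream project’s sample manifests and assuming the chart’s default storage class name of nfs-client (adjust storageClassName if you overrode it during the helm install):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:stable
    command: ["/bin/sh"]
    # Write a marker file to the NFS-backed volume, then exit successfully
    args: ["-c", "touch /mnt/SUCCESS && exit 0"]
    volumeMounts:
    - name: nfs-pvc
      mountPath: /mnt
  restartPolicy: Never
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim

Apply the manifest with kubectl apply -f, then check that the PVC is Bound and the pod completed; the SUCCESS file should appear in the dynamically created subdirectory on the FSx volume.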

As you can see, the deployed application created a PV and PVC successfully on Amazon FSx for NetApp ONTAP.

Describe the persistent volume to see its source; as you can see below, it is created on the NFS export served by the SVM with IP 172.31.1.234.

This is the power of VMware Cloud on AWS and AWS native services: customers can use AWS native services without worrying about egress charges or security, as everything is configured and accessed over private connections.

Building Windows Custom Machine Image for Creating Tanzu Workload Clusters

Featured

If your organisation is building an application based on Windows components (such as the .NET Framework) and is willing to deploy Windows containers on VMware Tanzu, this blog post covers how to build a Windows custom machine image and deploy a Windows Kubernetes cluster.

Windows Image Prerequisites 

  • vSphere 6.7 Update 3 or greater
  • A macOS or Linux workstation; Docker Desktop and Ansible must be installed on the workstation
  • Tanzu Kubernetes Grid v1.5.x or greater
  • Tanzu CLI
  • A recent image of Windows Server 2019 (newer than April 2021), which must be downloaded from a Microsoft Developer Network (MSDN) or Volume Licensing (VL) account.
  • The latest VMware Tools Windows ISO image. Download from VMware Tools
  • In vCenter, inside a datastore, create a folder such as iso and upload the Windows ISO and the VMware Tools ISO

Build a Windows Image 

  • Deploy a Tanzu Management Cluster with the Ubuntu 20.04 Kubernetes v1.22.9 OVA
  • Create a YAML file named builder.yaml with the following configuration; on my local system I have saved this YAML as builder.yaml
apiVersion: v1
kind: Namespace
metadata:
 name: imagebuilder
---
apiVersion: v1
kind: Service
metadata:
 name: imagebuilder-wrs
 namespace: imagebuilder
spec:
 selector:
   app: image-builder-resource-kit
 type: NodePort
 ports:
 - port: 3000
   targetPort: 3000
   nodePort: 30008
---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: image-builder-resource-kit
 namespace: imagebuilder
spec:
 selector:
   matchLabels:
     app: image-builder-resource-kit
 template:
   metadata:
     labels:
       app: image-builder-resource-kit
   spec:
     nodeSelector:
       kubernetes.io/os: linux
     containers:
     - name: windows-imagebuilder-resourcekit
       image: projects.registry.vmware.com/tkg/windows-resource-bundle:v1.22.9_vmware.1-tkg.1
       imagePullPolicy: Always
       ports:
         - containerPort: 3000

Connect the Kubernetes CLI to your management cluster by running:

#kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER

Apply the builder.yaml file as below:
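
#kubectl apply -f builder.yaml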

To ensure the container is running, run the below command (imagebuilder is the namespace defined in builder.yaml):
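
#kubectl get pods -n imagebuilder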

List the cluster’s nodes with wide output and take note of the Internal IP address value of the node with ROLES listed as control-plane,master:

#kubectl get nodes -o wide

Retrieve the containerd component’s URL and SHA by querying the control plane’s nodePort endpoint:

#curl http://CONTROLPLANENODE-IP:30008

Take note of containerd.path and containerd.sha256 values. The containerd.path value ends with something like containerd/cri-containerd-v1.5.9+vmware.2.windows-amd64.tar.

Create a JSON file in an empty folder named windows.json with the following configuration:

{
 "unattend_timezone": "WINDOWS-TIMEZONE",
 "windows_updates_categories": "CriticalUpdates SecurityUpdates UpdateRollups",
 "windows_updates_kbs": "",
 "kubernetes_semver": "v1.22.9",
 "cluster": "VSPHERE-CLUSTER-NAME",
 "template": "",
 "password": "VCENTER-PASSWORD",
 "folder": "",
 "runtime": "containerd",
 "username": "VCENTER-USERNAME",
 "datastore": "DATASTORE-NAME",
 "datacenter": "DATACENTER-NAME",
 "convert_to_template": "true",
 "vmtools_iso_path": "VMTOOLS-ISO-PATH",
 "insecure_connection": "true",
 "disable_hypervisor": "false",
 "network": "NETWORK",
 "linked_clone": "false",
 "os_iso_path": "OS-ISO-PATH",
 "resource_pool": "",
 "vcenter_server": "VCENTER-IP",
 "create_snapshot": "false",
 "netbios_host_name_compatibility": "false",
 "kubernetes_base_url": "http://CONTROLPLANE-IP:30008/files/kubernetes/",
 "containerd_url": "CONTAINERD-URL",
 "containerd_sha256_windows": "CONTAINERD-SHA",
 "pause_image": "mcr.microsoft.com/oss/kubernetes/pause:3.5",
 "prepull": "false",
 "additional_prepull_images": "mcr.microsoft.com/windows/servercore:ltsc2019",
 "additional_download_files": "",
 "additional_executables": "true",
 "additional_executables_destination_path": "c:/k/antrea/",
 "additional_executables_list": "http://CONTROLPLANE-IP:30008/files/antrea-windows/antrea-windows-advanced.zip",
 "load_additional_components": "true"
}

Update the placeholder values in the file to match your environment (vCenter credentials, datastore, network, ISO paths, and the containerd URL and SHA noted earlier).

Add the XML file that contains the Windows settings by following these steps:

  • Go to the autounattend.xml file on VMware {code} Sample Exchange.
  • Select Download.
  • If you are using the Windows Server 2019 evaluation version, remove <ProductKey>...</ProductKey>.
  • Name the file autounattend.xml.
  • Save the file in the same folder as the windows.json file and change the file permissions to 777 (see the command below).
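
A minimal example of setting the permissions, assuming the file is in the current directory:

#chmod 777 autounattend.xml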

From your client VM, run the following command from the folder containing your windows.json and autounattend.xml files:

#docker run -it --rm --mount type=bind,source=$(pwd)/windows.json,target=/windows.json --mount type=bind,source=$(pwd)/autounattend.xml,target=/home/imagebuilder/packer/ova/windows/windows-2019/autounattend.xml -e PACKER_VAR_FILES="/windows.json" -e IB_OVFTOOL=1 -e IB_OVFTOOL_ARGS='--skipManifestCheck' -e PACKER_FLAGS='-force -on-error=ask' -t projects.registry.vmware.com/tkg/image-builder:v0.1.11_vmware.3 build-node-ova-vsphere-windows-2019

NOTE: Before you run the docker run command above, make sure your workstation is running Docker Desktop and has Ansible installed.

To ensure the Windows image is ready to use, select your host or cluster in vCenter, select the VMs tab, then select VM Templates to see the Windows image listed.

Use a Windows Image for a Workload Cluster

The YAML below shows how to deploy a workload cluster that uses your Windows image as a template. (This Windows cluster uses NSX Advanced Load Balancer.)

#! ---------------------------------------------------------------------
#! non proxy env configs
#! ---------------------------------------------------------------------
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: tkg-workload02
CLUSTER_PLAN: dev
ENABLE_CEIP_PARTICIPATION: 'true'
IS_WINDOWS_WORKLOAD_CLUSTER: "true"
VSPHERE_WINDOWS_TEMPLATE: windows-2019-kube-v1.22.5
ENABLE_MHC: "false"

IDENTITY_MANAGEMENT_TYPE: oidc

INFRASTRUCTURE_PROVIDER: vsphere
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: false
DEPLOY_TKG_ON_VSPHERE7: 'true'
VSPHERE_DATACENTER: /SDDC-Datacenter
VSPHERE_DATASTORE: WorkloadDatastore
VSPHERE_FOLDER: /SDDC-Datacenter/vm/tkg-vmc-workload
VSPHERE_NETWORK: /SDDC-Datacenter/network/tkgvmc-workload-segment01
VSPHERE_PASSWORD: <encoded:T1V3WXpkbStlLUlDOTBG>
VSPHERE_RESOURCE_POOL: /SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool/Tanzu/tkg-vmc-workload
VSPHERE_SERVER: 10.97.1.196
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa....loudadmin@vmc.local

VSPHERE_USERNAME: cloudadmin@vmc.local
WORKER_MACHINE_COUNT: 3
VSPHERE_INSECURE: 'true'
ENABLE_AUDIT_LOGGING: 'true'
ENABLE_DEFAULT_STORAGE_CLASS: 'true'
ENABLE_AUTOSCALER: false
AVI_CONTROL_PLANE_HA_PROVIDER: 'true'
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: 3

WORKER_SIZE: small
CONTROLPLANE_SIZE: large
REMOVE_CP_TAINT: "true"
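
With the configuration saved (for example as windows-workload.yaml; the file name is illustrative), the cluster can be created with the Tanzu CLI:

#tanzu cluster create --file windows-workload.yaml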

If your cluster YAML file is correct, you should see that the new Windows cluster has started to deploy,

and after some time the cluster should deploy successfully.

If you are using NSX ALB (AKO) or Pinniped and see that those pods are not running, please refer here.

NOTE – if you see this error during the image build process: Permission denied: ‘./packer/ova/windows/windows-2019/autounattend.xml’, check the permissions of the autounattend.xml file.

Cloud Director OIDC Configuration using OKTA IDP

Featured

OpenID Connect (OIDC) is an industry-standard authentication layer built on top of the OAuth 2.0 authorization protocol. The OAuth 2.0 protocol provides security through scoped access tokens, and OIDC provides user authentication and single sign-on (SSO) functionality. For more refer here (https://datatracker.ietf.org/doc/html/rfc6749). There are two main types of authentication that you can perform with Okta:

  • The OAuth 2.0 protocol controls authorization to access a protected resource, like your web app, native app, or API service.
  • The OpenID Connect (OIDC) protocol is built on the OAuth 2.0 protocol and helps authenticate users and convey information about them. It’s also more opinionated than plain OAuth 2.0, for example in its scope definitions.

So, if you want to import users and groups from an OpenID Connect (OIDC) identity provider into your Cloud Director system (provider) or tenant organization, you must configure the provider/tenant organization with this OIDC identity provider. Imported users can log in to the system/tenant organization with the credentials established in the OIDC identity provider.

We can use VMware Workspace ONE Access (VIDM) or any public identity provider, but make sure the OAuth authentication endpoint is reachable from the VMware Cloud Director cells. In this blog post we will use Okta OIDC and configure VMware Cloud Director to use it for authentication.

Step:1 – Configure OKTA OIDC

For this blog post, I created a developer account on Okta at https://developer.okta.com/signup. Once the account is ready, follow the steps below to add Cloud Director as an application in the Okta console:

  • In the Admin Console, go to Applications > Applications.
  • Click Create App Integration.
  • To create an OIDC app integration, select OIDC – OpenID Connect as the Sign-in method.
  • Choose what type of application you plan to integrate with Okta; in Cloud Director's case, select Web Application.
  • App integration name: Specify a name for Cloud Director
  • Logo (Optional): Add a logo to accompany your app integration in the Okta org
  • Grant type: Select from the different grant type options
  • Sign-in redirect URIs: The Sign-in redirect URI is where Okta sends the authentication response and ID token for the sign-in request. In our case, for the provider use https://<vcd url>/login/oauth?service=provider, and if you are doing it for a tenant, use https://<vcd url>/login/oauth?service=tenant:<org name>
  • Sign-out redirect URIs: After your application contacts Okta to close the user session, Okta redirects the user to this URI.
  • Assignments – Controlled access: The default access option assigns and grants login access to this new app integration to everyone in your Okta org, or you can choose to limit access to selected groups.

Click Save. This action creates the app integration and opens the settings page to configure additional options.

The Client Credentials section has the Client ID and Client secret values for the Cloud Director integration. Copy both values, as we will enter them in Cloud Director.

The General Settings section has the Okta Domain. Copy this value as well, as we will enter it in Cloud Director.

Step:2 – Cloud Director OIDC Configuration

Now I am going to configure OIDC authentication for the provider side of Cloud Director; with very minor changes (the tenant URL) it can be configured for tenants too.

Let's go to Cloud Director: from the top navigation bar, select Administration, and on the left panel, under Identity Providers, click OIDC and then CONFIGURE.

General: Make sure that the OpenID Connect Status is active, and enter the Client ID and Client Secret information from the Okta app registration that we captured above.

To use the information from a well-known endpoint to automatically fill in the configuration information, turn on the Configuration Discovery toggle and enter a URL; for Okta the URL looks like this – https://<domain.okta.com>/.well-known/openid-configuration – and click NEXT.
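
Before continuing, it can be worth confirming that the discovery endpoint is reachable from the Cloud Director cells (replace the domain with your own Okta org):

#curl https://<domain.okta.com>/.well-known/openid-configuration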

Endpoint: Clicking NEXT populates the Endpoint information automatically; it is, however, essential that the information is reviewed and confirmed.

Scopes: VMware Cloud Director uses the scopes to authorize access to user details. When a client requests an access token, the scopes define the permissions that this token has to access user information. Enter the scope information and click Next.

Claims: You can use this section to map the information VMware Cloud Director gets from the user info endpoint to specific claims. The claims are strings for the field names in the VMware Cloud Director response

This is the most critical piece of configuration. Mapping of this information is essential for VCD to interpret the token/user information correctly during the login process.

For an Okta developer account, the user name is the email ID, so I am mapping Subject to email as below.

Key Configuration:

OIDC uses a public key cryptography mechanism. A private key is used by the OIDC provider to sign the JWT token, and the token can be verified by a third party using the public keys published on the OIDC provider's well-known URL. These keys form the basis of security between the parties, so the private keys must be kept protected from any cyber attacks. One of the best practices identified to keep the keys from being compromised is known as key rollover or key refresh.

From VMware Cloud Director 10.3.2 and above, if you want VMware Cloud Director to automatically refresh the OIDC key configurations, turn on the Automatic Key Refresh toggle.

  • Key Refresh Endpoint should get populated automatically as we choose auto discovery.
  • Select a Key Refresh Strategy.
    • Add – Preferred option; adds the incoming set of keys to the existing set of keys. All keys in the merged set are valid and usable.
    • Replace – Replace the existing set of keys with the incoming set of keys.
    • Expire After – You can configure an overlap period between the existing and incoming sets of keys. You can configure the overlapping time using the Expire Key After Period, which you can set in hourly increments from 1 hour up to 1 day.

If you did not use Configuration Discovery in Step 6, upload the public key that the identity provider uses to sign its tokens, and click SAVE.

Now go to Cloud Director, under Users, click IMPORT USERS, choose Source as OIDC, add the user that exists in Okta, and assign a role to that user. That's it.

Now you can log out of the VCD console and try to log in again; Cloud Director automatically redirects to Okta and asks for credentials to validate.

Once the user is authenticated by Okta, they will be redirected back to VCD and granted access per rights associated with the role that was assigned when the user was provisioned.

Verify that the Last Run and the Last Successful Run are identical. The runs start at the beginning of the hour. The Last Run is the time stamp of the last key refresh attempt. The Last Successful Run is the time stamp of the last successful key refresh. If the time stamps are different, the automatic key refresh is failing and you can diagnose the problem by reviewing the audit events. (This is only applicable if Automatic Key Refresh is enabled. Otherwise, these values are meaningless)

Bring on your Own OIDC – Tenant Configuration

For the tenant configuration, I have created a video; please take a look here. Tenants can bring their own OIDC and self-service it in the Cloud Director tenant portal.

This concludes the OIDC configuration with VMware Cloud Director. I would like to thank my colleague Ankit Shah for his guidance and review of this document.

Tanzu Service on VMware Cloud on AWS – Installing Tanzu Application Platform

Featured

VMware Tanzu Application Platform is a modular, application-aware platform that provides a rich set of developer tools and a paved path to production to build and deploy software quickly and securely on any compliant public cloud or on-premises Kubernetes cluster.

Tanzu Application Platform delivers a superior developer experience for enterprises building and deploying cloud-native applications on Kubernetes. It enables application teams to get to production faster by automating source-to-production pipelines. It clearly defines the roles of developers and operators so they can work collaboratively and integrate their efforts.

Operations teams can create application scaffolding templates with built-in security and compliance guardrails, making those considerations mostly invisible to developers. Starting with the templates, developers turn source code into a container and get a URL to test their app in minutes.

Pre-requisite

  1. You should have created an account on Tanzu Network to download Tanzu Application Platform packages.
  2. Servers should have Network access to https://registry.tanzu.vmware.com
  3. A container image registry reachable from the K8s cluster; in my case I installed Harbor with a Let's Encrypt certificate.
  4. Registry credentials with read and write access made available to Tanzu Application Platform to store images.
  5. Git repository for the Tanzu Application Platform GUI’s software catalogs, along with a token allowing read access.

Kubernetes cluster requirements

Installation requires a Kubernetes cluster running v1.20, v1.21, or v1.22 on Tanzu Kubernetes Grid Service on VMware Cloud on AWS, and pod security policies must be configured so that Tanzu Application Platform controller pods can run as root. To set the pod security policies, run:

#kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated

Install Cluster Essentials for VMware Tanzu

The Cluster Essentials for VMware Tanzu package simplifies the process of installing the open-source Carvel tools on your cluster. It includes a script that uses the Carvel CLI tools to download and install the server-side components kapp-controller and secretgen-controller on the targeted cluster. Currently, only macOS and Linux are supported for Cluster Essentials for VMware Tanzu.

  • Sign in to Tanzu Network.
  • Navigate to Cluster Essentials for VMware Tanzu on VMware Tanzu Network.
  • On Linux, download tanzu-cluster-essentials-linux-amd64-1.0.0.tgz.
  • Unpack the TAR file into the tanzu-cluster-essentials directory by running:
#mkdir $HOME/tanzu-cluster-essentials
#tar -xvf tanzu-cluster-essentials-linux-amd64-1.0.0.tgz -C $HOME/tanzu-cluster-essentials
  • Configure and run install.sh using below commands:
#export INSTALL_BUNDLE=registry.tanzu.vmware.com/tanzu-cluster-essentials/cluster-essentials-bundle@sha256:82dfaf70656b54dcba0d4def85ccae1578ff27054e7533d08320244af7fb0343
#export INSTALL_REGISTRY_HOSTNAME=registry.tanzu.vmware.com
#export INSTALL_REGISTRY_USERNAME=TANZU-NET-USER Name
#export INSTALL_REGISTRY_PASSWORD=TANZU-NET-USER PASSWORD
#cd $HOME/tanzu-cluster-essentials
#./install.sh

Now install the kapp and imgpkg CLIs onto your $PATH using the commands below:

sudo cp $HOME/tanzu-cluster-essentials/kapp /usr/local/bin/kapp
sudo cp $HOME/tanzu-cluster-essentials/imgpkg /usr/local/bin/imgpkg

For Linux Client VM: Install the Tanzu CLI and Plugins

To install the Tanzu command line interface (CLI) on a Linux operating system, create a directory named tanzu, download tanzu-framework-bundle-linux from Tanzu Network, unpack the TAR file into the tanzu directory, and install using the commands below:

#mkdir $HOME/tanzu 
#tar -xvf tanzu-framework-linux-amd64.tar -C $HOME/tanzu
#export TANZU_CLI_NO_INIT=true
#cd $HOME/tanzu 
#sudo install cli/core/v0.11.1/tanzu-core-linux_amd64 /usr/local/bin/tanzu
#tanzu version
#cd $HOME/tanzu
#tanzu plugin install --local cli all
#tanzu plugin list

Ensure that you have the accelerator, apps, package, secret, and services plug-ins installed. You need these plug-ins to install and interact with the Tanzu Application Platform.

Installing the Tanzu Application Platform Package and Profiles

VMware recommends installing the Tanzu Application Platform packages by relocating the images from the VMware Tanzu Network registry to your own registry, which eases the deployment process. Let's do that by logging in to the Tanzu Network registry, setting some environment variables, and relocating the images:

#docker login registry.tanzu.vmware.com
#export INSTALL_REGISTRY_USERNAME=MY-REGISTRY-USER
#export INSTALL_REGISTRY_PASSWORD=MY-REGISTRY-PASSWORD
#export INSTALL_REGISTRY_HOSTNAME=MY-REGISTRY
#export TAP_VERSION=VERSION-NUMBER
#imgpkg copy -b registry.tanzu.vmware.com/tanzu-application-platform/tap-packages:1.0.2 --to-repo ${INSTALL_REGISTRY_HOSTNAME}/TARGET-REPOSITORY/tap-packages

This completes the download and upload of the images to the local registry.
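
If the tap-install namespace used in the following commands does not exist on the cluster yet, create it first:

#kubectl create namespace tap-install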

Create a registry secret by running below command:

#tanzu secret registry add tap-registry \
  --username ${INSTALL_REGISTRY_USERNAME} --password ${INSTALL_REGISTRY_PASSWORD} \
  --server ${INSTALL_REGISTRY_HOSTNAME} \
  --export-to-all-namespaces --yes --namespace tap-install

Add the Tanzu Application Platform package repository to the cluster by running:

#tanzu package repository add tanzu-tap-repository \
  --url ${INSTALL_REGISTRY_HOSTNAME}/TARGET-REPOSITORY/tap-packages:$TAP_VERSION \
  --namespace tap-install

Get the status of the Tanzu Application Platform package repository, and ensure the status updates to Reconcile succeeded by running:

#tanzu package repository get tanzu-tap-repository --namespace tap-install

Tanzu Application Platform profile

The tap.tanzu.vmware.com package installs predefined sets of packages based on your profile settings. This is done by using the package manager you installed with Tanzu Cluster Essentials. Here is my full-profile sample file:

buildservice:
  descriptor_name: full
  enable_automatic_dependency_updates: true
  kp_default_repository: harbor.tkgsvmc.net/tbs/build-service
  kp_default_repository_password: <password>
  kp_default_repository_username: admin
  tanzunet_password: <password>
  tanzunet_username: tripathiavni@vmware.com
ceip_policy_disclosed: true
cnrs:
  domain_name: tap01.tkgsvmc.net
grype:
  namespace: default
  targetImagePullSecret: tap-registry
learningcenter:
  ingressDomain: learningcenter.tkgsvmc.net
metadata_store:
  app_service_type: LoadBalancer
ootb_supply_chain_basic:
  gitops:
    ssh_secret: ""
  registry:
    repository: tap
    server: harbor.tkgsvmc.net/tap
profile: full
supply_chain: basic
tap_gui:
  app_config:
    app:
      baseUrl: http://tap-gui.tap01.tkgsvmc.net
    backend:
      baseUrl: http://tap-gui.tap01.tkgsvmc.net
      cors:
        origin: http://tap-gui.tap01.tkgsvmc.net
    catalog:
      locations:
        - target: https://github.com/avnish80/tap/blob/main/catalog-info.yaml
          type: url
  ingressDomain: tap01.tkgsvmc.net
  ingressEnabled: "true"
  service_type: LoadBalancer

Save this file (tap-values.yml) with the values modified for your environment; for more details about the settings, check here.

Install Tanzu Application Platform

Finally, let's install TAP. To install the Tanzu Application Platform package, run the command below:

#tanzu package install tap -p tap.tanzu.vmware.com -v $TAP_VERSION --values-file tap-values.yml -n tap-install

To verify the installed packages, you can go to TMC and check there,

or you can run the command below to verify as well:

#tanzu package installed get tap -n tap-install

This completes the installation of Tanzu Application Platform. Developers can now develop and promote an application, create an application accelerator, add testing and security scanning to an application, and administer, set up, and manage supply chains.

Tanzu Service on VMware Cloud on AWS – Kubernetes Cluster Operations

Featured

Tanzu Kubernetes Grid is a managed service offered by VMware Cloud on AWS. Activate Tanzu Kubernetes Grid in one or more SDDC clusters to configure Tanzu support in the SDDC vCenter Server. In my previous post (Getting Started with Tanzu Service on VMware Cloud on AWS), I walked you through how to enable the Tanzu service on VMware Cloud on AWS.

In this post I will deploy a Tanzu Kubernetes cluster via the GUI (from Tanzu Mission Control) as well as via the CLI using the updated v1alpha2 API, so let's get started.

Deploy Tanzu Kubernetes Cluster using Tanzu Mission Control

Go to Tanzu Mission Control and validate that the VMC Supervisor Cluster is registered and healthy: click Administration, go to Management clusters, and check the status.

Now in Tanzu Mission Control, click Clusters and then CREATE CLUSTER.

Select your VMC Tanzu Management Cluster and click on “CONTINUE TO CREATE CLUSTER”

On the next screen, choose a Provisioner (the namespace name). You add a provisioner by creating a vSphere Namespace in the Supervisor Cluster, which you can do in the VMC vCenter.

Next, select the Kubernetes version (the latest supported version is preselected for you), the Pod CIDR, and the Service CIDR. You can also optionally select the default storage class for the cluster and the allowed storage classes. The list of storage classes that you can choose from is taken from your vSphere Namespace.

Select the type of cluster you want to create. The primary difference between the two is that the highly available cluster is deployed with multiple control plane nodes.

You can optionally select a different instance type for the cluster's control plane node and its storage class, and you can optionally add additional storage volumes for your control plane.

To configure additional volumes, click Add Volume and then specify the name, mount path, and capacity for the volume. To add another, click Add Volume again.

Next, you can define the default node pool and create additional node pools for your cluster. Specify the number of worker nodes to provision, select the instance type for the worker nodes, and select the storage class.

When you are ready to provision the new cluster, click Create Cluster and wait a few minutes.

You can also view the vCenter activities related to the creation of the Tanzu Kubernetes cluster.

Once the cluster is fully created and the TMC agent has reported back, you should see the status below on the TMC console, which shows that the cluster has been created successfully.

This completes the Tanzu Kubernetes cluster deployment using the GUI.

Deploy Tanzu Kubernetes Grid Service using v1alpha2 API yaml

The Tanzu Kubernetes Grid Service v1alpha2 API provides a robust set of enhancements for provisioning Tanzu Kubernetes clusters. Below is the YAML specification I am using to provision a Tanzu Kubernetes cluster using the Tanzu Kubernetes Grid Service v1alpha2 API:

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkgsv2
  namespace: wwmca
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: guaranteed-medium
      storageClass: vmc-workload-storage-policy-cluster-1
      volumes:
        - name: etcd
          mountPath: /var/lib/etcd
          capacity:
            storage: 4Gi
      tkr:  
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
    nodePools:
    - name: worker-nodepool-a1
      replicas: 2
      vmClass: best-effort-large
      storageClass: vmc-workload-storage-policy-cluster-1
      tkr:  
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55
  settings:
    storage:
      defaultClass: vmc-workload-storage-policy-cluster-1
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.53.100.0/16"]
      pods:
        cidrBlocks: ["192.0.5.0/16"]
      serviceDomain: managedcluster.local
      trust:
        additionalTrustedCAs:
          - name: CompanyInternalCA-1
            data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlG

Two key parameters I am using for cluster provisioning:

  • tkr.reference.name is the TKR name to be used by the control plane nodes; the supported format is “v1.21.2---vmware.1-tkg.1.ee25d55”
  • trust configures additional certificates for the cluster; if omitted, no additional certificate is configured

You can run the command below to check the status of the cluster provisioning:

#kubectl get tkc

Scale a Tanzu Kubernetes cluster
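
One simple way to scale the cluster is to edit the TanzuKubernetesCluster resource and change the replica counts (a minimal sketch, using the cluster and namespace names from the YAML above):

#kubectl edit tkc tkgsv2 -n wwmca

In the editor, update spec.topology.controlPlane.replicas or spec.topology.nodePools[].replicas and save; the Tanzu Kubernetes Grid Service reconciles the cluster to the new size.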

Publish the service Internally/Externally

Before we can make our service available over the internet, it should be accessible from within the VMware Cloud on AWS instance. Platform operators can publish applications through a Kubernetes Service of type LoadBalancer. This ability is made possible through the NSX-T Container Plugin (NCP) functionality built into Tanzu Kubernetes Grid. Let's deploy a basic container and expose it as type LoadBalancer:

#kubectl run nginx1 --image=nginx
#kubectl expose pod nginx1 --type=LoadBalancer --port=80
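
To find the load balancer IP that NCP assigned to the service, check the service object:

#kubectl get svc nginx1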

Now you can access the application internally by browsing to the load balancer's IP address from within the SDDC network.

Access application from Internet

To make it publicly available, we must assign a public IP address and configure a destination NAT. Let's do it: request a public IP on the VMC console and create a NAT rule on the Internet tab to allow access to the application from the internet.

Now access the application from the internet; you should be able to reach it successfully using the provided public IP.

Exposing a Kubernetes service to the Internet takes a couple of more steps to complete than exposing it to your internal networks, but the VMware Cloud Console makes those steps simple enough. After exposing the Kubernetes service using an NSX-T Load Balancer, you can request a new Public IP Address and then configure a NAT rule to send that traffic to the virtual IP address of the load balancer.

Run Data Platform in Minutes on VMware Cloud Director

Featured

Enterprise applications increasingly rely on large amounts of data that need to be distributed, processed, and stored. Open-source and commercially supported software stacks are available to implement a data platform that can offer common data management services, accelerating the development and deployment of data-hungry business applications. VMware has made it simple for cloud providers to offer, deploy, and manage these using the Data Platform Blueprint.

Understand Validated Blueprint and Requirement for Data Platform

You can find validated blueprint designs in the Bitnami Application Catalog and VMware Marketplace, including blueprints for building containerized data platforms with Kafka, Apache Spark, Solr, and Elasticsearch.

These engineered and tested data platform blueprints are implemented via Helm charts. They capture security and resource settings, affinity placement parameters, and observability endpoint configurations for data software runtimes. Using the Helm CLI or KubeApps tool, Helm charts enable the single-step, production-ready deployment of a data platform in a Kubernetes cluster, covering automated installation and the configuration of multiple containerized data software runtimes.

Each data platform blueprint comes with Kubernetes cluster node and resource configuration guidelines to ensure optimized sizing and utilization of the underlying Kubernetes cluster compute, memory, and storage resources. For example, the README.md covers the Kubernetes deployment guidelines for the Kafka, Apache Spark, and Solr blueprint.

This Blueprint enables the fully automated Kubernetes deployment of such multi-stack data platform, covering the following software components:

  • Apache Kafka – Data distribution bus with buffering capabilities
  • Apache Spark – In-memory data analytics
  • Solr – Data persistence and search
  • Data Platform Signature State Controller – Kubernetes controller that emits data platform health and state metrics in Prometheus format.

These containerized stateful software stacks are deployed in multi-node cluster configurations, which is defined by the Helm chart blueprint for this data platform deployment, covering:

  • Pod placement rules – Affinity rules to ensure placement diversity to prevent single point of failures and optimize load distribution
  • Pod resource sizing rules – Optimized Pod and JVM sizing settings for optimal performance and efficient resource usage
  • Default settings to ensure Pod access security

Cloud Director Provider Configuration

Install and Configure VMware Cloud Director App Launchpad

App Launchpad is a VMware Cloud Director service extension that service providers can use to create and publish catalogs of deployment-ready applications. Tenant users can then deploy the applications with a single click. For App Launchpad installation and configuration, see here. Once App Launchpad is installed, configure it with the Bitnami Helm repository as below:

  • Log in to the VMware Cloud Director service provider admin portal.
  • From the main menu (), select App Launchpad
  • On the Settings tab, click on Helm Chart Repository
  • Click Add.
  • Add the required repository details.

Add DataPlatform Blueprint from Helm Chart Repository

  1. Log in to the VMware Cloud Director service provider admin portal.
  2. From the main menu (), select App Launchpad.
  3. On the Applications tab, click Add New.
  4. Select Chart Repository as the application source.
  5. Select the chart repository from which you want to import applications and click Next.
  6. Select the application and application version that you want to add and click Next. You can add multiple applications at once.
  7. Select an existing VMware Cloud Director catalog to which you add the application or create one, and click Next.
  8. Review the applications details and click Add.

Tenant Self-Service Deployment

Once the provider has published the Data Platform blueprints to tenants, tenants can deploy them on a Tanzu Kubernetes cluster in a self-service way. Before deploying, the tenant needs to:

  • Create a Tanzu Kubernetes cluster with enough CPU and memory on the master and worker nodes; for this blog I created a cluster with four worker nodes, each with 4 vCPU and 16 GB memory.

Below are the minimum Kubernetes Cluster requirements for “Small” size data platform:

Data Platform Size: Small
Kubernetes Cluster Size: 1 Master Node (2 CPU, 4Gi Memory), 3 Worker Nodes (4 CPU, 32Gi Memory)
Usage: Data and application evaluation, development, and functional testing
  • Create a Default Storage Class – Once the Tanzu Kubernetes cluster is created, create a default storage class for it using the sample YAML below:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # ensure this is set to true
  name: vcd-disk-dev
provisioner: named-disk.csi.cloud-director.vmware.com
reclaimPolicy: Delete
parameters:
  storageProfile: "Tanzu01"  # your org VDC storage policy
  filesystem: "ext4"
  • Tenant Deploys Data Platform Blueprint – Now the tenant goes to the Cloud Director App Launchpad and deploys the Data Platform Blueprint with their choice of settings or the defaults:
  1. Select DataPlatform Blue Print and click on Deploy
  2. Enter Application Name
  3. Select Tanzu Kubernetes Cluster on which tenant want to install data platform
  4. Click on “Launch Application”
  • This blueprint bootstraps a Data Platform Blueprint-1 deployment on a Kubernetes cluster using the Helm package manager. Once the chart is installed, the deployed data platform cluster comprises:
    • Zookeeper with 3 nodes to be used for both Kafka and Solr
    • Kafka with 3 nodes using the zookeeper deployed above
    • Solr with 2 nodes using the zookeeper deployed above
    • Spark with 1 Master and 2 worker nodes
    • Data Platform Metrics emitter and Prometheus exporter

This process also creates the required persistent volumes for the application. You can view the persistent volumes inside the Cloud Director console by going into the Tanzu Kubernetes cluster,

or by going into the Organization VDC and clicking Named Disks.

The entire process takes some time. Once done, the tenant should see that all the pods are up and running, all the required volumes are created and attached, and all the required services are exposed.
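
A quick way to confirm this from the command line (a sketch; the kubeconfig and namespace placeholders match the ones used in the Kafka test below):

$ kubectl --kubeconfig kubeconfig-k8sdata -n <namespace> get pods,pvc,svc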

Testing the Kafka Cluster

(I am not Kafka expert took testing guidance from Internet, specially Platform9 website)

We are going to deploy a test client that will execute scripts against the Kafka cluster. Create and apply the following deployment:


$ vi testclient.yaml

apiVersion: v1
kind: Pod
metadata:
  name: testclient
  namespace: kafka
spec:
  containers:
  - name: kafka
    image: solsson/kafka:0.11.0.0
    command:
      - sh
      - -c
      - "exec tail -f /dev/null"

$ kubectl apply -f testclient.yaml

Now let's use this testclient container to create the first topic, which we are going to use to post messages:

$ kubectl --kubeconfig kubeconfig-k8sdata -n 7f55bcb2-75f5-42db-b2a2-7c18e8ba5011 exec -ti testclient -- ./bin/kafka-topics.sh --zookeeper dp02-zookeeper:2181 --topic messages --create --partitions 1 --replication-factor 1

Make sure you use the correct hostname for the ZooKeeper cluster and the correct topic configuration. Now let's verify that the topic exists by using the command below:

$  kubectl --kubeconfig kubeconfig-k8sdata -n 7f55bcb2-75f5-42db-b2a2-7c18e8ba5011 exec -ti testclient -- ./bin/kafka-topics.sh --zookeeper dp02-zookeeper:2181 --list

Now we can create one consumer and one producer instance so that we can send and consume messages. Open two PuTTY shells; on the first shell, create the consumer:

$ kubectl --kubeconfig kubeconfig-k8sdata -n 7f55bcb2-75f5-42db-b2a2-7c18e8ba5011 exec -ti testclient -- ./bin/kafka-console-consumer.sh --bootstrap-server dp02-kafka:9092 --topic messages --from-beginning

On the second shell, create the producer and start sending messages:

$ kubectl --kubeconfig kubeconfig-k8sdata -n 7f55bcb2-75f5-42db-b2a2-7c18e8ba5011 exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list dp02-kafka:9092 --topic messages

>Hi
>How are you ?

On consumer shell, you should see these messages getting populated using data streaming platform.

Cloud Director with Container Service Extension, along with App Launchpad, offers providers the easiest way to deliver many monetizable services in a multi-tenant environment, and the easiest way for tenants to deploy and consume these services. So, providers, what are you waiting for?

Tanzu on Azure Native

Featured

VMware Tanzu Kubernetes Grid provides organizations with a consistent, upstream-compatible, regional Kubernetes substrate that is ready for end-user workloads and ecosystem integrations. You can deploy Tanzu Kubernetes Grid across software-defined datacenters (SDDC) and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2. In this blog post we will deploy Tanzu Kubernetes Grid 1.4 to native Azure VMs.

Pre-requisite

  • Deploy a client VM (in my case an Ubuntu VM). This VM will be the bootstrap VM for Tanzu, from which we will deploy the management cluster in Azure; you can use a native Azure VM for this.
  • TKG uses a local Docker installation to set up a small, ephemeral, kind-based Kubernetes cluster that builds the TKG management cluster in Azure, so you need Docker locally to run the kind cluster.
  • Download and unpack the “Tanzu CLI” and “Kubectl” from HERE on the above VM, in a new directory named "tkg" or "tanzu"
    • unpack using #> tar -xvf tanzu-cli-bundle-v1.4.0-linux-amd64.tar
    • After you unpack the bundle file, in your folder, you will see a cli folder with multiple subfolders and files
    • Install Tanzu CLI using #> sudo install core/v1.4.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
  • Unpack the kubectl binary using:
    • #> tar -xvf kubectl-linux-v1.21.2+vmware.1.gz
    • Install kubectl using #> sudo install kubectl-linux-v1.21.2+vmware.1 /usr/local/bin/kubectl
  • Run the following command from the tanzu directory to install all the Tanzu plugins:
    • #> tanzu plugin install --local cli all
    • #> tanzu plugin list

Configure Azure resources

In this section we will prepare Microsoft Azure for running Tanzu Kubernetes Grid. For the networking, I have prepared the Azure network as below:

  • Every Tanzu Kubernetes Grid cluster requires 2 Public IP addresses
  •  For each Kubernetes Service object with type LoadBalancer, 1 Public IP address is required.
  • A VNET with: (only required if using existing VNET else TKG can create all automatically)
    • A subnet for the management cluster control plane node
    • A Network Security Group on the control plane subnet with Allow TCP over port 22 and 6443 for any source and destination inbound security rules, to enable SSH and Kubernetes API server connections
    • One additional subnet and Network Security Group for the management cluster worker nodes.
  • Below is what the high-level network topology will look like when we deploy the Tanzu management cluster:

Get Tenant ID

Make a note of the Tenant ID value, as it will be used later. You can find it by hovering over your account name at the upper right, or by browsing to Azure Active Directory > <Your Org> > Properties > Tenant ID.

Register Tanzu Kubernetes Grid as an Azure Client App & Get Application (Client) ID

Tanzu Kubernetes Grid manages Azure resources as a registered client application that accesses Azure through a service principal account. The steps below register your Tanzu Kubernetes Grid application with Azure Active Directory, create its account, create a client secret for authenticating communications, and record the information needed later to deploy a management cluster.

  • Go to Active Directory > App registrations and click on + New registration.
  • Enter a name and select who else can use it; leave the Redirect URI (optional) field blank.
  • Click Register. This registers the application with an Azure service principal account

Make a note of the Application (client) ID value, we will use it later.

Get Subscription ID

From the Azure Portal top level, browse to Subscriptions. At the bottom of the pane, select one of the subscriptions you have access to, and make a note of Subscription ID.

Add a Role, Create and Record Secret ID

Click the subscription listing to open its overview pane and Select to Access control (IAM) and click Add a role assignment.

  • In the Add role assignment pane
    • Select the Owner role
    • Under Assign access to, select “User, group, or service principal”
    • Under Select enter the name of your app, in my case “avnish-tkg”. It appears under Selected Members
  • Click Save. A popup appears confirming that your app was added as an owner for your subscription. You can also verify by going in to “Owned Application” section.
  • On the Azure Portal go to Azure Active Directory, click on App Registrations, select your “avnish-tkg” app under Owned applications.
  • Go to Certificates & secrets then in Client secrets click on New client secret.
  • In the Add a client secret popup, enter a Description, choose an expiration period, and click Add.
  • Azure lists the new secret with its generated value under Client Secrets. take a note of “Client Secret” value, which we will use later.

With all the above steps done, we will have four recorded values:

Subscription ID – XXXXXXX-XXX-4853-9cff-3d2d25758b70
Application Client ID – XXXXXXXX-6134-xxxx-b1a9-8fcbfd3ea189
Secret Value – XXXXX-xxxxxxxxxxxxxxxkB3VdcBF.c_C.
Tenant ID – XXXXXXXX-3cee-4b4a-a4d6-xxxxxdd62f0

We will use these values when we create the management cluster.
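
For convenience, these values can also be exported as environment variables on the bootstrap VM so they can be reused in the commands below (the values shown are placeholders):

#> export AZURE_TENANT_ID=<tenant-id>
#> export AZURE_CLIENT_ID=<application-client-id>
#> export AZURE_CLIENT_SECRET=<client-secret-value>
#> export AZURE_SUBSCRIPTION_ID=<subscription-id>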

Create an SSH Key-Pair

To connect to the management machines in Azure, we must provide the public key part of an SSH key pair. You can use the ssh-keygen tool to generate one:

  • #>ssh-keygen -t rsa -b 4096 -C "email@example.com"
  • At the prompt Enter file in which to save the key (/root/.ssh/id_rsa): press Enter to accept the default
  • Enter and repeat a password for the key pair

Copy the content of .ssh/id_rsa.pub, which we will use in the next section.
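
For example, to print the public key so it can be copied:

#> cat ~/.ssh/id_rsa.pub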

Accept Base VM image license

To run management cluster VMs on Azure, accept the license for their base Kubernetes version and machine OS by logging in with the Azure CLI and running the commands below:

#> az login --service-principal --username AZURE_CLIENT_ID --password AZURE_CLIENT_SECRET --tenant AZURE_TENANT_ID

AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID are your avnish-tkg app's client ID and secret and your tenant ID, as recorded above

#> az vm image terms accept --publisher vmware-inc --offer tkg-capi --plan k8s-1dot21dot2-ubuntu-2004 --subscription AZURE_SUBSCRIPTION_ID

In Tanzu Kubernetes Grid v1.4.0, the default cluster image --plan value is k8s-1dot21dot2-ubuntu-2004.

Start the Installer Interface

On the machine on which you downloaded and installed the Tanzu CLI, run:

#> tanzu management-cluster create -b "IP of this machine:port" -u

-b = the IP address and port to bind the installer interface to
-u = launch the installer user interface

On your local machine, browse to the above machine's IP address to access the installer interface, then choose Microsoft Azure and click DEPLOY.

In the IaaS Provider section, enter the Tenant ID, Client ID, Client Secret, and Subscription ID values for your Azure account. We recorded these values in the prerequisite section above.

  • Click Connect. The installer verifies the connection and changes the button label to Connected.
  • Select the Azure region in which to deploy the management cluster.
  • Paste the contents of your SSH public key, .ssh/id_rsa.pub, into the text box.
  • Under Resource Group, select either the Select an existing resource group or the Create a new resource group radio button.

In the VNET for Azure section, select either the Create a new VNET on Azure or the Select an existing VNET radio button; in my case I am using an existing VNET and subnets that I had provisioned already.

To make the management cluster private, enable the Private Azure Cluster checkbox, or leave it unticked. By default, Azure management and workload clusters are public, but you can also configure them to be private, which means their API server uses an Azure internal load balancer (ILB) and is therefore only accessible from within the cluster's own VNET or peered VNETs.

In the Management Cluster Settings section, choose the Development or Production tile, and use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs.

Under Worker Node Instance Type, select the configuration for the worker node VM

In the optional Metadata section, optionally provide descriptive information about this management cluster.

In the Kubernetes Network section, check the default Cluster Service CIDR and Cluster Pod CIDR ranges. If these CIDR ranges of 100.64.0.0/13 and 100.96.0.0/11 are not available, change the values under Cluster Service CIDR and Cluster Pod CIDR.

In the Identity Management section, enable or disable Identity Management Settings based on your use case.

In OS Image section, select the OS and Kubernetes version image template to use for deploying Tanzu Kubernetes Grid VMs, and click Next

In the Registration URL field, copy and paste the registration URL you obtained from Tanzu Mission Control. If you don't have TMC access or a URL, move on; you can register it later if you want.

In the CEIP Participation section, optionally deselect the check box to opt out of the VMware Customer Experience Improvement Program and then Click Review Configuration to see the details of the management cluster that we have configured.

Finally, click Deploy Management Cluster.

Deployment of the management cluster can take several minutes, in my case around 12 minutes.

Finally, once everything is deployed and configured, your management cluster is created.

The installer saves the configuration of the management cluster to ~/.config/tanzu/tkg/clusterconfigs with a generated filename of the form UNIQUE-ID.yaml. After the deployment has completed, you can rename the configuration file to something memorable.

If you go to the Azure portal, you should see one or three control plane VMs with names similar to CLUSTER-NAME-control-plane-abcde, one or three worker node VMs with names similar to CLUSTER-NAME-md-0-rh7xv, and Disk and Network Interface resources for the control plane and worker node VMs with names based on the same patterns.

At this point we can see the management cluster deployed successfully, so we can now go ahead and create workload clusters based on our requirements. Tanzu allows you to deploy and manage Kubernetes clusters on vSphere, on VMware-based public clouds, and natively on AWS and Azure; in the next post I will deploy Tanzu on AWS.

Code to Container with Tanzu Build Service

Featured

Tanzu Build Service uses the open-source Cloud Native Buildpacks project to turn application source code into container images. Build Service executes reproducible builds that align with modern container standards, and additionally keeps images up-to-date. It does so by leveraging Kubernetes infrastructure with kpack, a Cloud Native Buildpacks Platform, to orchestrate the image lifecycle. Tanzu Build Service helps customers develop and automate containerized software workflows securely and at scale.

In this post, Tanzu Build Service will monitor a git branch and automatically build containers with every push. It will then upload that container to your image registry for you to pull down and run locally or on a Kubernetes cluster.

Tanzu Build Service Installation Pre-requisite

  • Before installing Tanzu Build Service, ensure you have a Kubernetes cluster, that all worker nodes have at least 50 GB of ephemeral storage allocated to them, and that your Kubernetes cluster is configured with a default StorageClass.

To do this on Tanzu Kubernetes Grid Service, mount a 60GB volume at /var/lib on the worker nodes in the TanzuKubernetesCluster resource that corresponds to your TKGS cluster. I used the YAML content below for mounting volumes while creating the Tanzu Kubernetes cluster:

storage:
      classes:
      - tanzu01
      - tkgontkgs
      defaultClass: tkgontkgs
  topology:
    controlPlane:
      class: best-effort-medium
      count: 1
      storageClass: tkgontkgs
      volumes:
      - capacity:
          storage: 60Gi
        mountPath: /var/lib

    workers:
      class: best-effort-medium
      count: 3
      storageClass: tkgontkgs
      volumes:
      - capacity:
          storage: 60Gi
        mountPath: /var/lib
  • Ensure you have access to an existing container registry or install one using this guidance; it will be used to install Tanzu Build Service and to store the application images created by the build process.
  • Also, on a client VM from which you will connect to your Kubernetes cluster, install the docker CLI as well as the tools below; if you need guidance to install these tools, check here.
  • Install kapp, this is a deployment tool that allows users to manage Kubernetes resources in bulk.
  • Install ytt, this is a templating tool that understands YAML structure.
# wget -O ytt https://github.com/vmware-tanzu/carvel-ytt/releases/download/v0.35.1/ytt-linux-amd64
#chmod +x ytt
#mv ytt /usr/local/bin/ytt
  • Install kbld, a tool that builds, pushes, and relocates container images.
#wget -O kbld https://github.com/vmware-tanzu/carvel-kbld/releases/download/v0.30.0/kbld-linux-amd64
#mv kbld /usr/local/bin/kbld
#chmod +x /usr/local/bin/kbld
  • Install kp, which controls the kpack installation on Kubernetes; download the kp CLI for your operating system from the Tanzu Build Service page on Tanzu Network.
#mv kp-linux-0.3.1 /usr/local/bin/kp
#chmod +x /usr/local/bin/kp
  • Install imgpkg, a tool that relocates container images and pulls the release configuration files.
#wget -O imgpkg https://github.com/vmware-tanzu/carvel-imgpkg/releases/download/v0.17.0/imgpkg-linux-amd64
#mv imgpkg /usr/local/bin/imgpkg
#chmod +x /usr/local/bin/imgpkg
  • Finally, target the Kubernetes cluster on which you want to install Tanzu Build Service using:
#kubectl config use-context <context-name> 

Relocate Images to private Registry

First we need to relocate the images from the Tanzu Network registry to an internal image registry. To do that, log in to the image registry where you want to store the images by running:

#>docker login harbor.tanzu.zpod.io

(harbor.tanzu.zpod.io is my private registry hostname)

Now login to the Tanzu Network registry with your Tanzu Network credentials:

#>docker login registry.pivotal.io

Now let's relocate the images to your local registry using the imgpkg command:

#>imgpkg copy -b "registry.pivotal.io/build-service/bundle:1.2.2" --to-repo harbor.tanzu.zpod.io/tbs/build-service

This completes the image relocation process; now let's move on to the installation.

Tanzu Build Service Installation

Pull the Tanzu Build Service bundle image on your client vm from your internal registry using imgpkg:

#>imgpkg pull -b "harbor.tanzu.zpod.io/tbs/build-service:1.2.2" -o /tmp/bundle

Use the Carvel tools kapp, ytt, and kbld (which we installed in the prerequisites section) to install Build Service and define the required Build Service parameters by running:

#>ytt -f /tmp/bundle/values.yaml -f /tmp/bundle/config/ -f /tmp/ca.crt -v docker_repository='harbor.tanzu.zpod.io/tbs/build-service'     -v docker_username='admin' -v docker_password='<password>' -v tanzunet_username='tripathiavni@vmware.com' -v tanzunet_password='<password>'| kbld -f /tmp/bundle/.imgpkg/images.yml -f- | kapp deploy -a tanzu-build-service -f- -y




#/tmp/ca.crt – path to the registry CA certificate
#docker_repository – image repository where the TBS images exist

/tmp/ca.crt is the CA certificate of my registry. If all the above steps are correct, you should see "Succeeded" as below.

That means the TBS installation is complete; let's move on to the next step.

Import Tanzu Build Service Dependency Resources

The Tanzu Build Service dependencies, such as stacks, buildpacks, and builders, are used to build applications and keep them patched. These must be imported with the kp CLI and the dependency descriptor (descriptor-<version>.yaml) file from the Tanzu Build Service Dependencies page.

Now run the command below to import all the dependencies:

#>kp import -f /tmp/descriptor-100.0.146.yaml --registry-ca-cert-path /tmp/CA.cer

Verify Installation

To verify the Build Service installation, let's target the Kubernetes cluster where Tanzu Build Service has been installed and run the kp command that we installed as part of the prerequisites.

List the cluster builders available in your installation using:

#> kp clusterbuilder list

you should see output like below.

A few additional commands you can also run:

#> kp clusterstack list
#> kp clusterstore list

This completes the installation of Tanzu build Service.

Build and Deploy a Sample App

First, let's create a secret for GitLab, as I have installed GitLab in my vSphere lab environment:

#> kp secret create github-creds --git http://10.96.63.48 --git-user demoadmin -n tbs-demo

Let's also create a secret for the private registry; in my case the registry is hosted on harbor.tanzu.zpod.io in the vSphere environment:

#> kp secret create my-registry-creds --registry harbor.tanzu.zpod.io --registry-user admin --namespace tbs-demo

With the next command, we tell Tanzu Build Service where to retrieve the source code. Tanzu Build Service is configured to watch the master branch by default, but you can configure it to watch your own development branch for whatever feature or bug you happen to be working on. Finally, the tag is where the image will be pushed in your registry. Let's create an image using source code from my git repository by running:

#>kp image create spring-petclinic --tag harbor.tanzu.zpod.io/library/spring-petclinic:latest --namespace tbs-demo --git http://10.96.63.48/demoadmin/wwcp.get

It will download its dependencies and start building the image.

Once completed, Tanzu Build Service will put a copy of the image into Harbor registry, as well as onto your local Kubernetes cluster within the default namespace.

You can check the image status by running the commands below.
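
For example, with the kp CLI and the namespace created above (exact output will vary):

#> kp image list -n tbs-demo
#> kp build list spring-petclinic -n tbs-demo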

We can now deploy this image either locally or on a Kubernetes environment, or you can set up continuous deployment to deploy the built images on any Kubernetes platform. In the example below, I am installing it on my Tanzu Kubernetes cluster from the private registry where Build Service pushed the container image.
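
A minimal sketch of such a deployment (the port mapping assumes Spring Petclinic's default port 8080; adjust for your application):

#> kubectl create deployment spring-petclinic --image=harbor.tanzu.zpod.io/library/spring-petclinic:latest
#> kubectl expose deployment spring-petclinic --type=LoadBalancer --port=80 --target-port=8080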

As you can see, once the image is deployed, I can access it from my browser successfully.

This completes the Step-by-Step procedure to install and use Tanzu Build Service. If you would like to dive deeper into VMware Tanzu Build Service, check out the documentation section.

Deploy Tanzu Kubernetes Clusters using Tanzu Mission Control

Featured

VMware Tanzu Mission Control is a centralized management platform for consistently operating and securing your Kubernetes infrastructure and modern applications across multiple teams and clouds.

TMC is available through VMware Cloud services. Tanzu Mission Control provides operators with a single control point to give developers the independence they need to drive business forward, while ensuring consistent management and operations across environments for increased security and governance.

Use Tanzu Mission Control to manage your entire Kubernetes footprint, regardless of where your clusters reside.

Getting Started with Tanzu Mission Control

To get Started with Tanzu Mission Control, use VMware Cloud Services tools to gain access to VMware Tanzu Mission Control

Launch the TMC Console – Log in to the Tanzu Mission Control console to start managing clusters

Create a Cluster Group – Create a cluster group to help organize and manage your clusters

Register TKGS Management Cluster – When a customer registers a Tanzu Kubernetes Grid management cluster, they can bring all of its workload clusters under the management of Tanzu Mission Control, which facilitates consistent management using all of the capabilities of Tanzu Mission Control, as well as provisioning resources and creating new clusters directly from Tanzu Mission Control.

Once the customer has access to Tanzu Mission Control, has created a cluster group, and has registered the management cluster, follow the video below to deploy Kubernetes clusters on vCenter using the management cluster. The video has step-by-step instructions to help customers on their TMC journey.

Tanzu Mission Control helps customers bring all their K8s clusters together; once together, you can manage the policies and configuration of these clusters and enable developer self-service.

Tanzu Mission Control is now available to VMware Cloud Provider Partners as a Software-as-a-Service (SaaS) offering through the VMware Cloud Partner Navigator. This unlocks new opportunities for cloud providers to offer Kubernetes (K8s) managed services for multi-cloud and multi-team environments. For more details, check HERE or reach out to your Business Development Manager.

Deploy Harbor Registry on TKG Clusters

Featured

Tanzu Kubernetes Grid Service, informally known as TKGS, lets you create and operate Tanzu Kubernetes clusters natively in vSphere with Tanzu. You use the Kubernetes CLI to invoke the Tanzu Kubernetes Grid Service and provision and manage Tanzu Kubernetes clusters. The Kubernetes clusters provisioned by the service are fully conformant, so you can deploy all types of Kubernetes workloads you would expect. vSphere with Tanzu leverages many reliable vSphere features to improve the Kubernetes experience, including vCenter SSO, the Content Library for Kubernetes software distributions, vSphere networking, vSphere storage, vSphere HA and DRS, and vSphere security.

Harbor is an open-source, trusted, cloud-native container registry that stores, signs, and scans content. Harbor extends the open-source Docker Distribution by adding the functionalities usually required by users, such as security, identity control, and management. So let's go ahead and deploy Harbor. I have already provisioned a TKG cluster, and you can log in to it by using the command below:

#kubectl vsphere login --server=<supervisor-cluster-ip> --tanzu-kubernetes-cluster=<namespace-name> --tanzu-kubernetes-cluster-name=<cluster-name>

Set the correct context as you might have many clusters by using below command:

#kubectl config use-context <cluster-name01>

Add Harbor Helm repository

Now let's install Harbor. You can use various Helm repositories:

Harbor –  https://github.com/goharbor/harbor-helm or also the one from

Bitnami –  https://github.com/bitnami/charts/tree/master/bitnami/harbor which I’m going to use.

Add the repository of your choice to your client…

#helm repo add harbor https://helm.goharbor.io
#helm repo add bitnami https://charts.bitnami.com/bitnami

…and update Helm subsequently.

#helm repo update

Installing Harbor

We will deploy Harbor in a new Kubernetes namespace named tanzu-system-registry. Create the namespace with kubectl create ns tanzu-system-registry and start the deployment process by executing the following helm command with the corresponding options:

helm install harbor bitnami/harbor \
--set harborAdminPassword=admin \
--set global.storageClass=tkgontkgs \
--set service.type=LoadBalancer \
--set externalURL=harbor.tanzu.zpod.io \
--set service.tls.commonName=harbor.tanzu.zpod.io \
-n tanzu-system-registry

Check the pod status by using this command:

#kubectl get pods -n tanzu-system-registry

Let's check the services running inside the “tanzu-system-registry” namespace; this will give us the external IP of the service.

#kubectl get svc -n tanzu-system-registry

The above command gives us an “External IP” which was auto-configured in NSX-T. Let's browse to that external IP and log in with the username “admin” and the password we set in the helm command.
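If you just want the load balancer address, a jsonpath query also works; this is a minimal sketch that assumes the Bitnami chart exposed the registry through a service named harbor (verify the actual service name in the output above):

#kubectl get svc harbor -n tanzu-system-registry -o jsonpath='{.status.loadBalancer.ingress[0].ip}'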

Now we can browse and access the registry successfully.

You can push images to the Harbor registry to make them available to all clusters that are running in the Tanzu Kubernetes Grid instance. In my case, I deployed this registry for my “Tanzu Build Service” installation, as TBS needs a registry as a prerequisite.
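As a quick smoke test, you can tag and push an image from a workstation with Docker installed. This is a minimal sketch assuming the externalURL used above and Harbor's default library project; if the registry uses a self-signed certificate, you will also need to trust its CA on the Docker host first.

#docker login harbor.tanzu.zpod.io -u admin -p admin
#docker tag nginx:latest harbor.tanzu.zpod.io/library/nginx:latest
#docker push harbor.tanzu.zpod.io/library/nginx:latest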

Integrate Azure Files with Azure VMware Solution

Featured

Azure VMware Solution is a VMware-validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains the private cloud infrastructure and software, which allows customers to focus on developing and running workloads in their private clouds.

In this blog post I will configure Virtual Machines running on Azure VMware Solution to access Azure Files over an Azure private endpoint. This is an end-to-end, four-step process, described below:

and explained in this video:

Here is the step-by-step process of configuring and accessing Azure Files on Azure VMware Solution:

Step -01 Deploy Azure VMware SDDC

Azure VMware Solution provides customers private clouds that contain vSphere clusters, built on dedicated bare-metal Azure infrastructure. The minimum initial deployment is three hosts, and additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster. All provisioned private clouds have vCenter Server, vSAN, vSphere, and NSX-T. Customers can migrate workloads from their on-premises environments, deploy new virtual machines (VMs), and consume Azure services from their private clouds.

In this blog I am not going to cover the AVS deployment process, as this post is focused on the Azure Files integration. You can follow the process below for the deployment of Azure VMware Solution, or check the official documentation Here.

Step -02 Create ExpressRoute to Connect to Azure Native Services

In this section we need to decide whether to use an existing or a new ExpressRoute virtual network gateway; follow this decision tree for your AVS to Azure Native Services configuration.

Diagram showing the workflow for connecting Azure Virtual Network to ExpressRoute in Azure VMware Solution.

For this blog post, I will create a new vNET and a new ExpressRoute and attach that vNET to Azure VMware Solution. So, first things first: deploy your Azure VMware Solution, and once done, go ahead and create an Azure Virtual Network and Virtual Network Gateway.

Create Azure virtual network
  • On the Virtual Network page, select Create to set up your virtual network for your private cloud.
  • On the Create Virtual Network page, enter the details for your virtual network.
  • On the Basics tab, enter a name for the virtual network, select the appropriate region, and select Next
  • IP Addresses. (NOTE: You must use an address space that does not overlap with the address space you used when you created your private cloud.) Select + Add subnet, and on the Add subnet page, give the subnet a name and an appropriate address range. When complete, select Add.
  • Select Review + create.
Create a virtual network gateway

Now that we have created a virtual network, we will create a virtual network gateway. On the Virtual Network gateway page, select Create. On the Basics tab of the Create virtual network gateway page, provide values for the fields, and then select Review + create.

  • Subscription – Pre-populated with the subscription to which the resource group belongs.
  • Resource group – Pre-populated with the current resource group; this should be the resource group you created previously.
  • Name – Enter a unique name for the virtual network gateway.
  • Region – Select the geographical location of the virtual network gateway (the same region as AVS).
  • Gateway type – Select ExpressRoute.
  • SKU – Leave the default value: Standard.
  • Virtual network – Select the virtual network you created previously. If you don’t see the virtual network, make sure the gateway’s region matches the region of your virtual network.
  • Gateway subnet address range – This value is populated when you select the virtual network. Don’t change the default value.
  • Public IP address – Select Create new.
Connect ExpressRoute to the virtual network gateway

Let’s go to the Azure portal, navigate to the Azure VMware Solution private cloud, select Manage > Connectivity > ExpressRoute, and then select + Request an authorization key.

Provide a name for it and select Create. It may take about 30 seconds to create the key. Once created, the new key appears in the list of authorization keys for the private cloud.

Copy the authorization key and ExpressRoute ID; we will need them to complete the peering. Now navigate to the virtual network gateway and select Connections > + Add.

On the Add connection page, provide values for the fields, and select OK.

  • Name – Enter a name for the connection.
  • Connection type – Select ExpressRoute.
  • Redeem authorization – Ensure this box is selected.
  • Virtual network gateway – The virtual network gateway we deployed above.
  • Authorization key – Paste the authorization key copied in the previous step.
  • Peer circuit URI – Paste the ExpressRoute ID copied in the previous step.

The connection between your ExpressRoute circuit and your Virtual Network is created successfully.

To test connectivity, I deployed one VM on Azure VMware Solution and one VM in native Azure, and I can reach both VMs. I took a console of the AVS VM and could RDP to the Azure native VM, and I could ping from the Azure native VM to the VM deployed in AVS. This confirms that we have successfully established connectivity between AVS and native Azure.

Step -03 Create Storage and File Shares

Now let's move to Step 03. Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares.

Once you are on the Azure portal, click “Create” under Storage account and create a storage account in the same region where Azure VMware Solution is deployed. We will use this storage account to configure “Azure Files” over a private link.

Once the storage account is created, let's get into the networking section of the storage account, which allows you to configure networking options. In addition to the default public endpoint for a storage account, Azure Files provides the option to have one or more private endpoints.

A private endpoint is an endpoint that is only accessible within an Azure virtual network and by AVS Network. When you create a private endpoint for your storage account, your storage account gets a private IP address from within the address space of your virtual network, much like how an on-premises file server or NAS device receives an IP address within the dedicated address space of your on-premises network.

Let’s create a private endpoint by clicking on “+ Private endpoint”.

Enter the basic information and make sure the “Region” is the same region where your AVS is deployed.

On the next screen, ensure you choose “Target sub-resource” – “file”.

Select the Azure Virtual Network and subnet that we created in Step 02 and click Create.

Once in the storage account, select the File shares and click on “+ File share”. The new file share blade should appear on the screen. Complete the fields in the new file share blade to create a file share:

  • Name: the name of the file share to be created.
  • Quota: the quota of the file share for standard file shares; the provisioned size of the file share for premium file shares.
  • Tiers: the selected tier for a file share. 

Now the share is created. To mount this share, select the file share we need to mount and then click on “Connect”.

Select the drive letter to mount the share to, choose the authentication method, and copy the provided script.

Step -04 Access Files over SMB

On the Windows server running on Azure VMware Solution, paste the script into a shell on the host you’d like to mount the file share to, and run it.

This should mount the Azure file share to your Windows server as the Z: drive, which you can use to transfer or store any data you need.
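For reference, the portal-generated script essentially caches the storage account credentials and maps the share. Here is a minimal sketch using a hypothetical storage account avsfiles and share avsshare; substitute the values and access key from your own Connect dialog:

cmdkey /add:avsfiles.file.core.windows.net /user:localhost\avsfiles /pass:<storage-account-key>
net use Z: \\avsfiles.file.core.windows.net\avsshare /persistent:yes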

If you face issues accessing the file share using the host DNS name, take the private IP of the share connection by clicking on “Network Interface” and copying the private IP.

Add this private IP address to the Windows server's hosts file, and then it should work as expected.

This completes the integration of Azure VMware Solution with Azure Files (an Azure native service) over a private link. Similarly, customers can use many more Azure native services that can be easily integrated with Azure VMware Solution.

Windows Bare Metal Servers on NSX-T overlay Networks

Featured

In this post, I will configure a Windows 2016/2019 bare metal server as a transport node in NSX-T and then configure an NSX-T overlay segment on that Windows 2016/2019 bare metal server, which allows a VM and a bare metal server on the same network to communicate.

To use NSX-T Data Center on a Windows physical server (bare metal server), let’s first understand a few terms which we will use in this post.

  • Application – represents the actual application running on the physical server, such as a web server or a database server.
  • Application Interface – represents the network interface card (NIC) which the application uses for sending and receiving traffic. One application interface per physical server is supported.
  • Management Interface – represents the NIC which manages the physical server.
  • VIF – the peer of the application interface which is attached to the logical switch. This is similar to a VM vNIC.

Now let's configure our Windows server to operate in an NSX overlay environment:

Enable WinRM service on Windows 2019

First of all, we need to enable Windows Remote Management (WinRM) on Windows Server 2016/2019 to allow the Windows server to interoperate with third-party software and hardware. To enable the WinRM service with a self-signed certificate, run:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
PS$ wget -OutFile ConfigureWinRMService.ps1 https://raw.github.com/vmware/bare-metal-server-integration-with-nsxt/blob/master/bms-ansible-nsx/windows/ConfigureWinRMService.ps1
PS$ powershell.exe -ExecutionPolicy ByPass -File ConfigureWinRMService.ps1

Run the following command to verify the configuration of WinRM listeners:

winrm e winrm/config/listener

NOTE – For production bare metal servers, please enable WinRM with HTTPS for security reasons; the procedure is explained here.

Installing NSX-T Kernel Module on Windows 2019 Server

Now let’s proceed with installing the NSX kernel module on the Windows Server 2016/2019 bare metal server. Make sure to download the NSX kernel module for Windows Server 2016/2019 matching the version of your NSX-T instance from VMware downloads.

Start the installation of the NSX kernel module by executing the .exe file on your Windows BM server.

Configure the bare metal server as a transport node in NSX-T

Before we add the bare metal server as a transport node, we need to create a new uplink profile in NSX-T that we are going to use for the bare metal servers. An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can include teaming policies, active and standby links, transport VLAN ID, and MTU setting.

In my lab the Windows 2016/2019 bare metal server has two network adapters: one NIC in the management VLAN and the other one on a TEP VLAN (VLAN 160).

Once the uplink profile is configured, we can proceed with adding the Windows 2016/2019 bare metal server as a transport node in NSX-T. In the NSX-T web UI go to System –> Fabric –> Nodes and click on +ADD.

Enter the Management Interface IP address of your Windows bare metal host and its credentials, and do not change the Installation Location. NSX-T will validate your credentials against the Windows BM and then allow you to move to the next step.

On the next screen, choose a virtual switch name or leave the default, select the overlay transport zone (as we are connecting this host to the overlay), and select the uplink profile and the management uplink interface.

On the next screen, configure the IP address, subnet, and gateway for the TEP interface; this can be done by specifying a static IP list or by choosing an IP pool which belongs to the TEP VLAN.

Click on Next. This will start preparing your Windows BM for NSX-T.

Once the preparation/configuration is complete, we can attach a segment from the above screen or we can Continue Later. Let's click on “Continue Later” for now; we will add it in a different step.

Now if you look at your Windows BM in the NSX-T console, it is ready for NSX-T and asking us to attach an overlay segment.

Attach Overlay Segment

Select the host in the “Host Transport Nodes” section, click on “Action”, and then click on “Manage Segment”, which takes you to the same screen that SELECT SEGMENT would have shown during the original deployment.

Now select which segment the Application Interface for the physical server will reside on and click on “ADD SEGMENT PORT”.

​Add Segment Port and Attach Application Interface

On the add Segment port screen:

Choose Assign New IP (this will be your application IP on the Windows BM) –> NSX Interface Name (default is “nsx-eth”) – this is the Application Interface name on the physical server.

Default Gateway –> provide the T0 or T1 gateway address.

IP Assignment – I am using Static, but you can also use DHCP or an IP Pool for the application interface.

Save – once Save is pressed, the configuration is sent to the physical server, and you can see on the physical server that the application IP has been assigned to a virtual interface.
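To confirm this from the Windows side, you can list the adapters and IPv4 addresses and look for the application IP you just assigned; a minimal sketch (the name of the NSX-created virtual interface varies, so inspect the full list):

PS$ Get-NetAdapter
PS$ Get-NetIPAddress -AddressFamily IPv4 | Select-Object InterfaceAlias, IPAddress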

Now in the NSX-T Manager console the host configuration shows everything as green and up.

You can now reach this bare metal server from a VM with IP address “172.16.20.101”, which is on the same segment as the physical server, without doing any bridging.

If you click on the Windows server, you can see other information, specifically the “Geneve Tunnels” between the ESXi host on which the VM is running and the Windows BM host on which your application is running.

This completes the configuration. It gives customers/partners the ability to run VMs and bare metal servers on the same network, with security (like micro-segmentation) managed from a single console, the NSX-T console. I hope this helps. Please share your feedback 🙂

Cloud Native Runtimes for Tanzu

Featured

Dynamic Infrastructure

This is an IT concept whereby underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. Modern cloud infrastructure built on VMs and containers requires automated:

  • Provisioning, Orchestration, Scheduling
  • Service Configuration, Discovery and Registry
  • Network Automation, Segmentation, Traffic Shaping and Observability

What is Cloud Native Runtimes for Tanzu ?

Cloud Native Runtimes for VMware Tanzu is a Kubernetes-based platform to deploy and manage modern serverless workloads. Cloud Native Runtimes for Tanzu is based on Knative and runs on a single Kubernetes cluster. Cloud Native Runtimes automates all aspects of these dynamic infrastructure requirements.

Serverless ≠ FaaS

Serverless:
  • Multi-threaded (server)
  • Cloud provider agnostic
  • Long lived (days)
  • Offers more flexibility

FaaS:
  • Cloud provider specific
  • Single-threaded functions
  • Short lived (minutes)
  • Managing a large number of functions can be tricky

Cloud Native Runtime Installation

Command line Tools Required For Cloud Native Runtime of Tanzu

The following command line tools need to be downloaded and installed on a client workstation from which you will connect to and manage the Tanzu Kubernetes cluster and Tanzu Serverless.

kubectl (Version 1.18 or newer)

  • Using a browser, navigate to the Kubernetes CLI Tools (available in vCenter Namespace) download URL for your environment.
  • Select the operating system and download the vsphere-plugin.zip file.
  • Extract the contents of the ZIP file to a working directory. The vsphere-plugin.zip package contains two executable files: kubectl and the vSphere Plugin for kubectl. kubectl is the standard Kubernetes CLI. kubectl-vsphere is the vSphere Plugin for kubectl that helps you authenticate with the Supervisor Cluster and Tanzu Kubernetes clusters using your vCenter Single Sign-On credentials.
  • Add the location of both executables to your system’s PATH variable.

kapp (Version 0.34.0 or newer)

kapp is a lightweight application-centric tool for deploying resources on Kubernetes. Being both explicit and application-centric, it provides an easier way to deploy and view all resources created together regardless of what namespace they’re in. Download and install it as shown in the combined example after the kn section below.

ytt (Version 0.30.0 or newer)

ytt is a templating tool that understands YAML structure. Download, rename, and install it as shown in the combined example after the kn section below.

kbld (Version 0.28.0 or newer)

kbld orchestrates image builds (delegating to tools like Docker, pack, kubectl-buildkit) and registry pushes. It works with a local Docker daemon and remote registries, for both development and production use cases.

kn

The Knative client kn is your door to the Knative world. It allows you to create Knative resources interactively from the command line or from within scripts. Download, rename, and install as below:
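The install pattern is the same for all four CLIs. Here is a minimal sketch for a Linux workstation, assuming you have already downloaded the release binaries from each project's GitHub releases page (the file names below are assumptions, so adjust them to match what you downloaded):

#chmod +x ytt-linux-amd64 kapp-linux-amd64 kbld-linux-amd64 kn-linux-amd64
#mv ytt-linux-amd64 /usr/local/bin/ytt
#mv kapp-linux-amd64 /usr/local/bin/kapp
#mv kbld-linux-amd64 /usr/local/bin/kbld
#mv kn-linux-amd64 /usr/local/bin/kn
#ytt --version && kapp --version && kbld --version && kn version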

Download Cloud Native Runtimes for Tanzu (Beta)

To install Cloud Native Runtimes for Tanzu, you must first download the installation package from VMware Tanzu Network:

  1. Log into VMware Tanzu Network.
  2. Navigate to the Cloud Native Runtimes for Tanzu release page.
  3. Download the serverless.tgz archive and release.lock
  4. Create a directory named tanzu-serverless.
  5. Extract the contents of serverless.tgz into your tanzu-serverless directory:
#tar xvf serverless.tgz

Install Cloud Native Runtimes for Tanzu on Tanzu Kubernetes Grid Cluster

For this installation I am using a TKG cluster deployed on vSphere 7 with Tanzu. To install Cloud Native Runtimes for Tanzu on Tanzu Kubernetes Grid, first target the cluster you want to use and verify that you are targeting the correct Kubernetes cluster by running:

#kubectl cluster-info

Run the installation script from the tanzu-serverless directory and wait for it to complete:

#./bin/install-serverless.sh

During my installation, I faced a couple of issues like this…

I just re-ran the installation, which automatically fixed these issues.

Verify Installation

To verify that your Serving installation was successful, create an example Knative service. For information about Knative example services, see Hello World – Go in the Knative documentation. Let's deploy a sample web application using the kn CLI. Run:

#kn service create hello --image gcr.io/knative-samples/helloworld-go -n default

Take the external URL from the output above and either add the Contour IP with the host name to your local hosts file or add a DNS entry, then browse to it. If everything is done correctly, your first application is running successfully.

You can list and describe the service by running command:

#kn service list -A
#kn service describe hello -n default

It looks like everything is up and ready as we configured it. Some other things you can do with the Knative CLI are to describe and list the routes of the app:

#kn route describe hello -n default

Create your own app

This demo used an existing Knative example; why not make our own app from an image? Let's do it using the YAML below:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "This is my app"

Save this as k2.yaml (or any name you like); now let's deploy this new service using the kubectl apply command:

#kubectl apply -f k2.yaml

Next, we can list the service and describe the new deployment, using the name provided in the YAML file:
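For example, these commands list and describe the new Knative service (ksvc is the short resource name registered by the Knative CRDs):

#kubectl get ksvc helloworld -n default
#kn service describe helloworld -n default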

And now finally browse the URL by going to http://helloworld.default.example.com (you will need to add an entry in DNS or the hosts file).

This proves your application is running successfully. Cloud Native Runtimes for Tanzu is a great way for developers to move quickly into serverless development, with networking, autoscaling (even to zero), and revision tracking that lets users see changes in their apps immediately. Go ahead and try this in your lab, and once it is GA, in production.

Quick Tip – Delete Stale Entries on Cloud Director CSE

Featured

Container Service Extension (CSE) is a VMware Cloud Director (VCD) extension that helps tenants create and work with Kubernetes clusters. CSE brings Kubernetes as a Service to VCD by creating customized VM templates (Kubernetes templates) and enabling tenant users to deploy fully functional Kubernetes clusters as self-contained vApps.

If, for any reason, a tenant's cluster creation gets stuck and continues to show “CREATE:IN_PROGRESS” or “Creating” for many hours, it means that the cluster creation has failed for an unknown reason and the representing defined entity has not transitioned to the ERROR state.

Solution

To fix this, a provider admin needs to use the API to delete these stale entries; there are a few simple steps to clean them up.

First – let's get the “X-VMWARE-VCLOUD-ACCESS-TOKEN” for API calls by making the API call below:

  • https://<vcd url>/cloudapi/1.0.0/sessions/provider
  • Authentication Type: Basic
  • Username/password – <adminid@system>/<password>

The above API call returns “X-VMWARE-VCLOUD-ACCESS-TOKEN” inside the header section of the response window. Copy this token and use it as a “Bearer” token in subsequent API calls.
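If you prefer the command line over an API client, the same call can be made with curl; a minimal sketch, assuming API version 36.0 in the Accept header (adjust to a version supported by your VCD instance):

curl -sk -D - -o /dev/null -X POST "https://<vcd-fqdn>/cloudapi/1.0.0/sessions/provider" \
  -H "Accept: application/json;version=36.0" \
  -u 'adminid@system:<password>' | grep -i x-vmware-vcloud-access-token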

Second – we need to get the “Cluster ID” of the stale cluster which we want to delete. To get the “Cluster ID”, go into the Cloud Director Kubernetes Container Extension, click on the cluster which is stuck, and get the Cluster ID in URN format.

Third (Optional) – get the cluster details using the API call below, authenticating with the Bearer token from the first step:

Get  https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/

Fourth – delete the stale cluster using the API call below, providing the “Cluster ID” which we captured in the second step and using authentication type “Bearer Token”:

Delete https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>/
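The equivalent curl call looks like this (again just a sketch, with the same API version assumption as above):

curl -sk -i -X DELETE "https://<vcd-fqdn>/cloudapi/1.0.0/entities/<cluster-id>" \
  -H "Accept: application/json;version=36.0" \
  -H "Authorization: Bearer <X-VMWARE-VCLOUD-ACCESS-TOKEN>"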

The above API call should respond with “204 No Content”, which means the API call was executed successfully.

Now if you log in to the Cloud Director “Kubernetes Container Cluster” extension, the above API call should have removed the stale/stuck cluster entry.

Finally, go to the Cloud Director vApp section and see if any vApp/VM is still running for that cluster; shut down that VM and delete it from Cloud Director. Three simple API calls complete the process.

VMware Cloud Director Assignable Storage Policies to Entity Types

Featured

Service providers can use storage policies in VMware Cloud Director to create a tiered storage offering, for example Gold, Silver, and Bronze, or even offer dedicated storage to tenants. With the enhancement of storage policies to support VMware Cloud Director entities, providers now have the flexibility to control how tenants use storage policies. Providers can have not only tiered storage, but also isolated storage for running VMs, containers, edge gateways, catalogs, and so on.
A common use case that this Cloud Director 10.2.2 update addresses is the need for shared storage across clusters or offering lower cost storage for non-running workloads. For example, instead of having a storage policy with all VMware Cloud Director entities, you can break your storage policy into a “Workload Storage Policy” for all your running VMs and containers, and dedicate a “Catalog Storage Policy” for longer term storage. A slower or low cost NFS option can back the “Catalog Storage Policy”, while the “Workload Storage Policy”  can run on vSAN.

Starting with VMware Cloud Director 10.2.2, if a provider does not want a provider VDC storage policy to support certain types of VMware Cloud Director entities, they can edit and limit the list of entities associated with the policy. Here is the list of supported entity types:

  • Virtual Machines – Used for VMs and vApps and their disks
  • VApp/VM Templates – Used for vApp Templates
  • Catalog Media – Used for Media inside catalogs
  • Named Disks – Used for Named disks
  • TKC – Used for TKG Clusters
  • Edge Gateways – Used for Edge Gateways

You can limit the entity types that a storage policy supports to one or more types from this list. When you create an entity, only the storage policies that support its type are available.

Use Case – Catalog-Only Storage Policy

There are many use cases for assignable storage policies; I am demonstrating this one because many providers have asked for this feature in my discussions. For this use case we will use the entity types Media and vApp Template.

Adding the Media and vApp Template entity types to a storage policy marks the storage policy as usable with VDC catalogs. These entity types can be added at the PVDC storage policy layer. Storage policies that are associated with datastores intended for catalog-only storage can be marked with these entity types, to force only catalog-related items onto the catalog-only storage datastore.

When added: VCD users will be able to use this storage policy with Media/Templates. In other words, tenants will see this storage policy as an option when pre-provisioning their catalogs on a specific storage policy.

  • In Cloud Director provider portal, select Resources and click Cloud Resources.
  • select Provider VDCs, and click the name of the target provider virtual data center.
  • Under Policies, select Storage
  • Click the radio button next to the target storage policy, and click Edit Supported Types.
  • From the Supports Entity Types drop-down menu, select Select Specific Entities.
  • Select the entities that you want the storage policy to support, and click Save.

Validation

Let's validate this functionality by logging in as a tenant and going into the “Storage Policies” settings; here we can see this Org VDC has two storage policies assigned by the provider.

Now let's deploy a virtual machine in the same Org VDC, and you can see that the policy “WCP”, which was marked as catalog-only, is not available for VM provisioning.

In the same Org VDC let's create a new “Catalog”; here you can see both policies are visible, one exclusively for “Catalog” and another one which is allowed for all entity types.

Policy Removal: VCD users will no longer be able to use this storage policy with media/templates, but whatever is already there will continue to be there.

This addition to Cloud Director gives providers the opportunity to manage storage based on entity type. This is one use case; similarly, one particular storage policy could be used for edge placement, another could be used to spin up production-grade Tanzu Kubernetes clusters, while the default storage could be used by CSE native Kubernetes clusters for development container workloads. This opens up new monetization opportunities for providers, so upgrade your Cloud Director environment and start monetizing.

This Post is also available as Podcast

Auto Scale Applications with VMware Cloud Director

Featured

Starting with VMware Cloud Director 10.2.2, Tenants can auto scale applications depending on the current CPU and memory utilization. Depending on predefined criteria for the CPU and memory use, VMware Cloud Director can automatically scale up or down the number of VMs in a selected scale group.

Cloud Director Scale Groups are a new top level object that tenants can use to implement automated horizontal scale-in and scale-out events on a group of workloads. You can configure auto scale groups with:

  • A source vApp template
  • A load balancer network
  • A set of rules for growing or shrinking the group based on the CPU and memory use

VMware Cloud Director automatically spins up or shuts down VMs in a scaling group based on the above three settings. This blog post will help you enable scale groups in Cloud Director, and we will also configure a scale group.

Configure Auto Scale

Log in to the Cloud Director 10.2.2 cell with admin/root credentials and enable metric data collection and publishing, either by setting up metrics collection in a Cassandra database or by collecting metrics without metrics data persistence. In this post we are going to configure it without metrics data persistence. To collect metrics data without data persistence, run the following commands:

#/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n statsFeeder.metrics.collect.only -v true 

#/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n statsFeeder.metrics.publishing.enabled -v true

Now the second step is to create a file named “metrics.groovy” in the /tmp folder of the Cloud Director appliance with the following contents:

configuration {
    metric("cpu.ready.summation") {
        currentInterval=20
        historicInterval=20
        entity="VM"
        instance=""
        minReportingInterval=300
        aggregator="AVERAGE"
    }
}

Change the file ownership and permissions appropriately and import the file using the command below:

$VCLOUD_HOME/bin/cell-management-tool configure-metrics --metrics-config /tmp/metrics.groovy
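If the import complains that it cannot read the file, adjusting the ownership before re-running usually resolves it; a minimal sketch, assuming the cell services run as the vcloud user:

#chown vcloud.vcloud /tmp/metrics.groovy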

Let's enable the auto scaling plugin by running the commands below on the Cloud Director cell:

$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --set enabled=true
$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --set username=<username> (the account should have admin privileges)
$VCLOUD_HOME/bin/cell-management-tool configure-autoscale --encrypt --set password=<password> (the password for the above account)

In the Cloud Director provider console, under Customize Portal, there is a plugin which the provider can enable for tenants who need auto scale functionality for their applications, or it can be made available to all tenants.

Auto scale was released with VMware Cloud Director 10.2.2, which allows service providers to grant tenants the rights to create scale groups. Under Tenant Access Control, select Rights Bundles, then select the “vmware:scalegroup Entitlement” bundle, and click Publish.

Also ensure the provider adds the necessary VMWARE:SCALEGROUP rights to the tenant roles that want to use scale groups.

Tenant Self Service

In the tenant portal, after login, select Applications, select the Scale Groups tab, and click New Scale Group.

Select an organization VDC in which the tenant wants to create the scale group.

Alternatively, the tenant can access scale groups from a selected organization virtual data center (VDC) by going into that specific organization VDC.

  • Enter a name and a description of the new scale group.
  • Select the minimum and maximum number of VMs to which you want the group to scale and click Next.

Select a VM template for the VMs in the scale group and a storage policy, and click Next. The template needs to be pre-populated in a catalog by the tenant, or by the provider and published to tenants.

The next step is to select a network for the scale group. If the tenant's Org VDC is backed by NSX-T Data Center and NSX ALB (Avi) has been published as a load balancer by the provider, then the tenant can choose the NSX ALB load balancer edge on which load balancing has been enabled and a server pool has been set up by the tenant before enabling scale groups.

If the tenant wants to manage the load balancer on their own, or if there is no need for a load balancer, then select I have a fully set-up network; Auto Scale will automatically add VMs to this network.

VMware Cloud Director starts the initial expansion of the scale group to reach the minimum number of VMs. Basically, it starts creating VMs from the template that the tenant selected while creating the scale group, and it continues spinning up VMs until the minimum number specified in the scale group is reached.

Add an Auto Scaling Rule

  1. Click Add Rule.
  2. Enter a name for the rule.
  3. Select whether the scale group must expand or shrink when the rule takes effect.
  4. Select the number of VMs by which you want the group to expand or shrink when the rule takes effect.
  5. Enter a cooldown period in minutes after each auto scale event in the group. The conditions cannot trigger another scaling until the cooldown period expires, and the cooldown period resets when any of the rules of the scale group takes effect.
  6. Add a condition that triggers the rule. The duration period is the time for which the condition must be valid to trigger the rule. To trigger the rule, all conditions must be met.
  7. To add another condition, click Add Condition. The tenant can add multiple conditions.
  8. Click Add.

From the details view of a scale group, when you select Monitor, you can see all tasks related to this scale group. For example, you can see the time of creation of the scale group, all growing or shrinking tasks for the group, and the rules that initiated the tasks; inside the Virtual Machines section you can see which VM was created at what time, the IP address of each VM, and so on.

Here is the section showing the scale tasks which were triggered and their status.

The Virtual Machines section, as mentioned, provides the scale group VM information and its details.

Here is another section showing the scale tasks which were triggered, along with their status, start time, and completion time.

Auto scale groups in Cloud Director 10.2.2 bring important functionality natively to Cloud Director; it does not require any external components such as vRealize Orchestrator or vRealize Operations, and it does not incur additional cost to the tenant or provider. Go ahead, upgrade your Cloud Director, enable it, and let your tenants enjoy this cool functionality.

vSphere Tanzu with AVI Load Balancer

Featured

With the release of vSphere 7.0 Update 2, VMware adds a new load balancer option for vSphere with Tanzu which provides a production-ready load balancer for your vSphere with Tanzu deployments. This load balancer is called NSX Advanced Load Balancer (NSX ALB, also known as the Avi load balancer). It provides virtual IP addresses for the Supervisor Control Plane API server, the TKG guest cluster API servers, and any Kubernetes applications that require a service of type LoadBalancer. In this post, I will go through a step-by-step deployment of the new NSX ALB along with vSphere with Tanzu.

VLAN & IP address Planning

There are many ways to plan IP addressing; in this lab I will place management, VIP, and workload nodes on three different networks. For this deployment, I will be using three VLANs: one for Tanzu management, one for the frontend or VIP network, and one for the Supervisor cluster and TKG clusters. Here is my IP planning sheet:

Deploying & Configuring NSX ALB (AVI)

Now let's deploy the NSX ALB controller (Avi LB) by following a process very similar to deploying any other OVA, and I will assign the NSX ALB management IP address from the management network range. The NSX ALB is available as an OVA. In this deployment, I am using version 20.1.4. The only information required at deployment time is:

  • A static IP Address
  • A subnet mask
  • A default gateway
  • A sysadmin login authentication key

I have deployed one controller appliance for this lab, but if you are doing a production deployment, it is recommended to create a three-node controller cluster for high availability and better performance.

Once the OVA deployment completes, power on the VM, wait for some time, then browse to the NSX ALB URL using the IP address provided during deployment, log in to the controller, and then:

  • Enter DNS Server Details and Backup Passphrase
  • Add NTP Server IP address
  • Provide Email/SMTP details ( not mandatory)

Next, choose VMware vCenter as your “Orchestrator Integration”. This creates a new cloud configuration in NSX ALB called Default-Cloud. Enter the details below on the next screen:

  • Insert IP of your vCenter,
  • vCenter Credential
  • Permission – Write Permission
  • SDN Integration – None
  • Select appropriate vCenter “Data Center”
  • For Default Network IP Address Management – Static

On next screen, we define the IP address pool for the Service Engines.

  • Select the Management Network (the “management interface” of the “service engine” will be connected to this network)
  • Enter the IP subnet
  • Enter free IPs into the IP Address Pool section
  • Enter the default gateway

Select No for configuring multiple Tenants. Now we’re ready to get into the NSX ALB configuration.

Create IPAM Profile

IPAM will be used to assign VIPs to virtual services, Kubernetes control planes, and applications running inside pods. To create an IPAM profile go to: Templates -> Profiles -> IPAM/DNS Profiles

  • Assign a name to the profile; this IPAM will be for the “frontend” network
  • Select Type – “Avi Vantage IPAM”
  • Cloud for Usable Network – choose “Default-Cloud”
  • Usable Network – choose the port group, in my case “frontend” (all vCenter port groups are populated automatically by vCenter discovery)

Create and Configure DNS profile as below: ( This is optional)

Go to “Infrastructure”, click on “Cloud”, edit “Default Cloud”, and update the IPAM Profile and DNS Profile fields with the IPAM profile and DNS profile that we created above.

 Configure the VIP Network

On the NSX ALB console, go to “Infrastructure” and then “Networks”; this will display all the networks discovered by NSX ALB. Select the “frontend” network and click on Edit.

  • Click on “Add Subnet”
  • Enter the subnet, in my case – 192.168.117.0/24
  • Click on Static IP Address pool:
  • Ensure “Use Static IP Address for VIPs and SE” is selected
    • and enter the IP address pool, in my case 192.168.117.100-192.168.117.200
    • Click on Save

Create New Controller Certificate

The default Avi certificate doesn't contain an IP SAN and can't be used by vCenter/Tanzu to connect to Avi, so we need to create a custom controller certificate and use it during the Tanzu management plane deployment. Let's create the controller certificate by going to Templates -> Security -> SSL/TLS Certificates -> Create -> Controller Certificate

Complete the page with the required information and make sure the “Subject Alternative Name (SAN)” is the NSX ALB controller IP/cluster IP or hostname.

Then go to Administration -> Settings -> Access Settings and edit System Access Settings:

Delete all the certificates in the SSL/TLS certificate field and choose the certificate that we created in the section above.

Go to Templates -> Security -> SSL/TLS Certificates and copy the certificate we created, to use while enabling the Tanzu management plane.

Configure Routing

Since the workload network (192.168.116.0/24) is on a different subnet from the VIP network (192.168.117.0/24), we need to add a static route in the NSX ALB controller. Go to the Infrastructure page, navigate to Routing and then to Static Route. Click the Create button and create the static routes accordingly.

Enable Tanzu Control Plane (Workload Management)

I am not going to go through the full deployment of workload management; the steps are similar to those detailed HERE. However, there are a few steps that are different:

  • On page 6, choose Type = Avi as your load balancer type.
  • There is no load balancer IP address range required; this is now provided by NSX ALB.
  • The certificate we need to provide should be the NSX ALB certificate which we created in the previous step.

The new NSX Advanced Load Balancer is far superior to the HA Proxy, especially in provider environments. Providers can deploy, offer, and manage K8s clusters with a VMware-supported load balancer type; even though the configuration requires a few additional steps, it is very simple to set up. The visibility provided into the health and usage of the virtual services will be extremely beneficial for day-2 operations, and should provide great insights for providers who are responsible for provisioning and managing Kubernetes distributions running on vSphere. Feel free to share any feedback…

Tanzu Basic – Building TKG Cluster

Featured

In continuation of our Tanzu Basic deployment series, this is the last part. By now we have our vSphere with Tanzu cluster enabled and deployed, so the next step is to create Tanzu Kubernetes clusters. In case you missed the previous posts, here they are:

  1. Getting Started with Tanzu Basic
  2. Tanzu Basic – Enable Workload Management

Create a new namespace

A vSphere Namespace is kind of a resource pool or container that I can give to a project, team, or customer: a “Kubernetes + VM environment” where they can create and manage their application containers and virtual machines. They can't see the others' environments and they can't expand past the limits set by administrators. The vSphere Namespace construct allows the vSphere admin to set several policies in one place. The user/team/customer/project can create new workloads as they wish within their vSphere Namespace. You also set resource limits and permissions on the namespace so that DevOps engineers can access it. Let's create our first namespace by going to the vCenter Menu and clicking on “Workload Management”

Once you are in “Workload Management”, click on “CREATE NAMESPACE”

Select the vSphere Cluster on which you enabled “workload management”

  1. Give DNS compliant name of the namespace
  2. Select Network for the namespace

Now we have successfully created “namespace” named “tenant1-namespace”

The next step is to add storage. Here we need to choose a vCenter storage policy which TKG will use to provision the control plane VMs; this policy will also show up as a Kubernetes storage class for this namespace. The persistent volume claims that correspond to persistent volumes can originate from the Tanzu Kubernetes cluster.

After you assign the storage policy, vSphere with Tanzu creates a matching Kubernetes storage class in the Namespace. For the VMware Tanzu Kubernetes Clusters, the storage class is automatically replicated from the namespace to the Kubernetes cluster. When you assign multiple storage policies to the namespace, a separate storage class is created for each storage policy.

Access Namespace

Share the Kubernetes Control Plane URL with DevOps engineers as well as the user name they can use to log in to the namespace through the Kubernetes CLI Tools for vSphere. You can grant access to more than one namespace to a DevOps engineer.

The developer browses the URL and downloads the TKG CLI plugin for their environment (Windows, Linux, or Mac).

To provision Tanzu Kubernetes clusters by using the Tanzu Kubernetes Grid Service, we connect to the Supervisor Cluster using the vSphere Plugin for kubectl, which we downloaded in the step above, and authenticate with the vCenter Single Sign-On credentials that the vSphere admin gave to the developer.

After you log in to the Supervisor Cluster, the vSphere Plugin for kubectl generates the context for the cluster. In Kubernetes, a configuration context contains a cluster, a namespace, and a user. You can view the cluster context in the file .kube/config. This file is commonly called the kubeconfig file.

I am switching to the “tenant1-namespace” context, as I have access to multiple namespaces; similarly, a DevOps user can switch context with the following command.

Below are commands to explore and help you find the right VM type for Kubernetes cluster sizing:

#kubectl get sc

This command lists all the storage classes.

#kubectl get virtualmachineimages

This command lists all the VM images available for creating TKG clusters and will help you decide which Kubernetes version you want to use.

#kubectl get virtualmachineclasses

This command lists all the machine classes (T-shirt sizes) available for TKG clusters.

Deploy a TKG Cluster

To deploy a TKG cluster we need to create a YAML file with the required configuration parameters that define the cluster.
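Here is a minimal example of such a manifest; the cluster name, VM class, and storage class below are placeholders, so substitute the values you found with the kubectl get commands above:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: tenant1-namespace
spec:
  distribution:
    version: v1.18
  topology:
    controlPlane:
      count: 3
      class: best-effort-small
      storageClass: tanzubasic
    workers:
      count: 3
      class: best-effort-small
      storageClass: tanzubasic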

  1. The above YAML provisions a cluster with three control plane nodes and three worker nodes.
  2. The apiVersion and kind parameter values are constants.
  3. The Kubernetes version, listed as v1.18, is resolved to the most recent distribution matching that minor version.
  4. The VM class best-effort-<size> has no reservations. For more information, see Virtual Machine Class Types for Tanzu Kubernetes Clusters.

Once the file is ready, let's provision the Tanzu Kubernetes cluster using the following kubectl command:
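Assuming you saved the manifest above as tkg-cluster-01.yaml (a placeholder file name), that would be:

#kubectl apply -f tkg-cluster-01.yaml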

Monitor cluster provisioning using the vSphere Client; the TKG management plane creates the Kubernetes cluster automatically.

Verify cluster provisioning using the following kubectl commands.
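For example (run against the Supervisor namespace context, with the placeholder names used above):

#kubectl get tanzukubernetescluster -n tenant1-namespace
#kubectl get machines -n tenant1-namespace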

You can continue to monitor/verify cluster provisioning using the #kubectl describe tanzukubernetescluster command; at the end of the command output it shows:

Node Status – node status from the Kubernetes perspective

VM Status – node status from the vCenter perspective

After around 15-20 minutes, you should see the VM & Node Status as ready, and the Phase shown as Running. This completes the deployment of a Kubernetes cluster on vSphere 7 with Tanzu. We have successfully deployed a Kubernetes cluster, so now let's deploy an application and expose it to the external world.

Deploy an Application

To deploy your first application we need to log in to the new cluster that we created; you can use the command below:

#kubectl vsphere login --server=IP-ADDRESS --vsphere-username USERNAME --tanzu-kubernetes-cluster-name CLUSTER-NAME --tanzu-kubernetes-cluster-namespace NAMESPACE

Once login is completed, we can deploy application workloads to Tanzu Kubernetes clusters using pods, services, persistent volumes, and higher-level resources such as Deployments and ReplicaSets. Let's deploy an application using the command below:

#kubectl run --restart=Never --image=gcr.io/kuar-demo/kuard-amd64:blue kuard

The command has successfully deployed the application; let's expose it so that we can access it through the VMware HA Proxy load balancer:

#kubectl expose pod kuard --type=LoadBalancer --name=kuard --port=8080

The application is exposed successfully; let's get the public IP which has been assigned to the application by the above command. In this case the external IP is 192.168.117.35.

Let’s access the application using the IP assigned to application and see if we can easily access the application.

Get Visibility of Cluster using Octant

Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer's toolkit for gaining insight into, and approaching the complexity found in, Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management, along with a plugin system to further extend its capabilities. Installation is pretty simple and detailed Here.

This completes the series, with the installation of a TKG Kubernetes cluster, running an application on top of it, and accessing that application using HA Proxy. Please share your feedback if any!

Tanzu Basic – Enable Workload Management

Featured

In continuation of the last post, where we deployed VMware HA Proxy, we will now enable a vSphere cluster for Workload Management by configuring it as a Supervisor Cluster.

Part-1- Getting Started with Tanzu Basic – Part1

What is Workload Management

With Workload Management we can deploy and operate the compute, networking, and storage infrastructure for vSphere with Kubernetes. vSphere with Kubernetes transforms vSphere to a platform for running Kubernetes workloads natively on the hypervisor layer. When enabled on a vSphere cluster, vSphere with Kubernetes provides the capability to run Kubernetes workloads directly on ESXi hosts and to create upstream Kubernetes clusters within dedicated resource pools

Since we selected creating a Supervisor Cluster with the vSphere networking stack in the previous post, vSphere Native Pods will not be available, but we can create Tanzu Kubernetes clusters.

Pre-Requisite

As per our HA Proxy deployment, we chose the HAProxy VM with three virtual NICs, thus connecting HAProxy to a frontend network. DevOps users and external services can access HAProxy through virtual IPs on the frontend network. Below are the prerequisites to enable Workload Management:

  • DRS and HA should be enabled on the vSphere cluster, and ensure DRS is in the fully automated mode.
  • Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
  • Storage Policy: Create a storage policy for the placement of Kubernetes control plane VMs.
    • I have created two policies named “basic” & “TanzuBasic”
    • NOTE: You should create the policy with a lower case policy name
    • This policy has been created with Tag based placement rules
  • Content Library: Create a subscribed content library using URL: https://wp-content.vmware.com/v2/latest/lib.json on the vCenter Server to download VM image that is used for creating nodes of Tanzu Kubernetes clusters. The library will contain the latest distributions of Kubernetes.
  • Add all hosts from the cluster to a vSphere Distributed Switch and create port groups for Workload Networks

Deploy Workload Management

With the release of vSphere 7 Update 1, a free 60-day evaluation of Tanzu is available. Enter your details to receive communication from VMware and get started with Tanzu.

The next screen takes you to the networking options available with vCenter; make sure:

  • You choose the correct vCenter
  • For networking there are two networking stacks; since we haven't installed NSX-T, that option will be greyed out and unavailable, so choose “vCenter Server Network” and move to “Next”

On the next screen you will be presented with the vSphere clusters which are compatible with Tanzu. In case you don't see any cluster, go to the “Incompatible” section and click on the cluster, which will give you guidance on the reason it is incompatible; go back, fix the reason, and try again.

Select the size of the resource allocation you need for the Control Plane. For the evaluation, Tiny or Small should be enough and click on Next.

Storage: Select the storage policy which we created as per of pre-requisite and click on Next

Load Balancer: This section is very important and we need to ensure that we provide correct values:

  • Enter a DNS-compliant name; don't use an underscore in the name
  • Select the type of Load Balancer: “HA Proxy”
  • Enter the management data plane IP address. This is the management IP and port number assigned to the VMware HA Proxy management interface; in our case it is 192.168.115.10:5556.
  • Enter the username and password used during deployment of the HA Proxy
  • Enter the IP address ranges for virtual servers. We need to provide the IP ranges for virtual servers; these are the IP addresses which we defined in the frontend network. It's the exact same range which we used during the HA Proxy deployment and configuration process, but this time we have to write the full range instead of using CIDR format; in this case I am using: 192.168.117.33-192.168.117.62
  • Finally, enter the Server CA cert. If you added a cert during deployment, use that. If you used a self-signed cert, you can retrieve that data from the VM from /etc/haproxy/ca.crt.

Management Network: The next portion is to configure the IP addresses for the Tanzu Supervisor control plane VMs; these will come from the management IP range.

  • We will need 5 consecutive free IPs from the management IP range; the Starting IP Address is the first IP in a range of five IPs to assign to the Supervisor control plane VMs' management network interfaces.
  • One IP is assigned to each of the three Supervisor control plane VMs in the cluster
  • One IP is used for a floating IP which we will use to connect to the management plane
  • One IP is reserved for use during the upgrade process
  • This will be on the mgmt port group

Workload Network:

Service IP Address: we can take the default network subnet for “IP Address for Services”; change this if you are using this subnet anywhere else. This subnet is for internal communication and is not routed.

And for the last network we define the Kubernetes node IP range; this applies to both the Supervisor cluster and the guest TKG clusters. This range comes from the workload IP range which we created in the last post with VLAN 116.

  • Port Group – workload
  • IP Address Range – 192.168.116.32-192.168.116.63

Finally, choose the content library which we created as part of the prerequisites.

If you have provided the right information with the correct configuration, it will take around 20 minutes to install and configure the entire TKG management plane. You might see a few errors while the management plane is being configured, but you can ignore them, as those operations are retried automatically and the errors clear when the particular task succeeds.

NOTE-Above screenshot has different cluster name as i have taken it from different environment but IP schema is same.

I hope this article helps you enable your first “Workload Management” vSphere cluster without NSX-T. In the next blog post I will cover the deployment of TKG clusters and other things around that…

Load Balancer as a Service with Cloud Director

Featured

NSX Advanced Load Balancer's (Avi) intent-based software load balancer provides scalable application delivery across any infrastructure. Avi provides 100% software load balancing to ensure a fast, scalable, and secure application experience. It delivers elasticity and intelligence across any environment. It scales from 0 to 1 million SSL transactions per second in minutes, and achieves 90% faster provisioning and 50% lower TCO than a traditional appliance-based approach.

With the release of Cloud Director 10.2, NSX ALB is natively integrated with Cloud Director to provide self-service Load Balancing as a Service (LBaaS), where providers can release load balancing functionality to tenants, and tenants consume that functionality based on their requirements. In this blog post we will cover how to configure LBaaS.

Here is High Level workflow:

  1. Deploy NSX ALB Controller Cluster
  2. Configure NSX-T Cloud
  3. Discover NSX-T Inventory,Logical Segments, NSGroups (ALB does it automatically)
  4. Discover vCenter Inventory,Hosts, Clusters, Switches (ALB does it automatically)
  5. Upload SE OVA to content library (ALB does it automatically, you just need to specify name of content library)
  6. Register NSX ALB Controller, NSX-T Cloud and Service Engines to Cloud Director and Publish to tenants (Provider Controlled Configuration)
  7. Create Virtual Service,Pools and other settings (Tenant Self Service)
  8. Create/Delete SE VMs & connect to tenant network (ALB/VCD Automatically)

Deploy NSX ALB (AVI) Controller Cluster

The NSX ALB (Avi) Controller provides a single point of control and management for the cloud. The Avi Controller runs on a VM and can be managed using its web interface, CLI, or REST API, or in this case Cloud Director. The Avi Controller stores and manages all policies related to services and management. To ensure Avi controller high availability, we need to deploy three Avi Controller nodes to create a three-node Avi Controller cluster.

The deployment process is documented Here & the cluster creation process is Here

Create NSX-T Cloud inside NSX ALB (AVI) Controller

The NSX ALB (Avi) Controller uses APIs to interface with the NSX-T Manager and vCenter to discover the infrastructure. Here are the high-level activities to configure the NSX-T Cloud in the NSX ALB management console:

  1. Configure NSX-T manager IP/URL (One per Cloud)
  2. Provide admin credentials
  3. Select Transport zone (One to One Mapping – One TZ per Cloud)
  4. Select Logical Segment to use as SE Management Network
  5. Configure vCenter server IP/URL (One per Cloud)
  6. Provide Login username and password
  7. Select Content Library to push SE OVA into Content Library

Service Engine Groups & Configuration

Service Engines are created within a group, which contains the definition of how the SEs should be sized, placed, and made highly available. Each cloud will have at least one SE group.

  1. SE Groups contain sizing, scaling, placement and HA properties
  2. A new SE will be created from the SE Group properties
  3. SE Group options will vary based upon the cloud type
  4. An SE is always a member of the group it was created within (in this case, the NSX-T Cloud)
  5. Each SE group is an isolation domain
  6. Apps may gracefully migrate, scale, or fail over across SEs in the group

Service Engine High Availability:

Active/Standby

  1. VS is active on one SE, standby on another
  2. No VS scaleout support
  3. Primarily for default gateway / non-SNAT app support
  4. Fastest failover, but half of SE resources are idle

Elastic N + M

  1. All SEs are active
  2. N = number of SEs a new Virtual Service is scaled across
  3. M = the buffer, or number of failures the group can sustain
  4. SE failover decision determined at time of failure
  5. Session replication done after new SE is chosen
  6. Slower failover, less SE resource requirement

Elastic Active / Active 

  1. All SEs are active
  2. Virtual Services must be scaled across at least 2 Service engines
  3. Session info proactively replicated to other scaled service engines
  4. Faster failover, but requires more SE resources

Cloud Director Configuration

Cloud Director configuration is twofold: provider configuration and tenant configuration. Let's first cover the provider configuration…

Provider Configuration

Register AVI Controller: The provider administrator logs in as an administrator and registers the AVI Controller with Cloud Director. The provider has the option to add multiple AVI Controllers.

NOTE: If you are registering with NSX ALB's default self-signed certificate and registration throws an error, regenerate the self-signed certificate in NSX ALB and try again.
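The registration can also be confirmed through the Cloud Director CloudAPI. The sketch below is illustrative only: it assumes a provider session bearer token has already been obtained, uses the load balancer controller listing endpoint introduced with the LBaaS feature, and leaves the API version as a placeholder to match your installation.

# list the NSX ALB controllers registered in Cloud Director (placeholder FQDN, token and API version)
#curl -k -H "Accept: application/json;version=<api-version>" -H "Authorization: Bearer <provider-session-token>" https://vcd.example.com/cloudapi/1.0.0/loadBalancer/controllers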

Register NSX-T cloud

Next, we need to register with Cloud Director the NSX-T Cloud that we configured earlier in the ALB Controller:

  1. Select one of the registered AVI Controllers
  2. Provide a meaningful name
  3. Select the NSX-T Cloud that we configured in AVI
  4. Click on ADD.

Assign Service Engine groups

Now register Service Engine groups as either “Dedicated” or “Shared”, based on tenant requirements; a provider can also maintain both types of groups and assign them to tenants as needed.

  1. Select NSX-T Cloud which we had registered above
  2. Select the “Reservation Model”
    1. Dedicated Reservation Model: AVI creates two dedicated Service Engine nodes for each load-balancer-enabled Organization VDC Edge Gateway.
    2. Shared Reservation Model: This model is elastic and shared among all tenants. AVI creates a pool of Service Engines that is shared across tenants; capacity allocation is managed in VCD, and AVI elastically deploys and un-deploys Service Engines based on usage.

Provider Enables and Allocates resources to Tenant

The provider enables load balancing in the context of an Org VDC Edge Gateway by following the steps below:

  1. Click on Edges 
  2. Choose the Edge Gateway on which to enable load balancing
  3. Go to “Load Balancer” and click on “General Settings”
  4. Click on “Edit”
  5. Toggle on “Activate” to activate the load balancer
  6. Select Service Specification

The next step is to assign Service Engines to the tenant based on its requirements. Go to “Service Engine Group”, click “ADD”, and attach one of the SE groups registered previously to one of the customer's Edge Gateways.

Provider can restrict usage of Service Engines by configuring:

  1. Maximum Allowed: The maximum number of virtual services the Edge Gateway is allowed to use.
  2. Reserved: The number of guaranteed virtual services available to the Edge Gateway.

Tenant User Self Service Configuration

Pools: Pools maintain the list of servers assigned to them and perform health monitoring, load balancing and persistence.

  1. Inside General Settings some of the key settings are:
    1. Provide Name of the Pool
    2. Load Balancing Algorithm
    3. Default Server Port
    4. Persistence
    5. Health Monitor
  2. Inside the Members section:
    1. Add the virtual machine IP addresses that need to be load balanced
    2. Define State, Port and Ratio
    3. SSL Settings allow SSL offload and Common Name Check

Virtual Services: A virtual service advertises an IP address and ports to the external world and listens for client traffic. When a virtual service receives traffic, it may be configured to:

  1. Proxy the client’s network connection.
  2. Perform security, acceleration, load balancing, gather traffic statistics, and other tasks.
  3. Forward the client’s request data to the destination pool for load balancing.

The tenant chooses the Service Engine Group that the provider has assigned, then chooses the Load Balancer Pool created in the step above and, most importantly, the Virtual IP. This IP address can come from the external IP range of the Org VDC, or, if you want an internal VIP, you can use any internal IP.

In my example, I am running two virtual machines with Org VDC internal IP addresses, and the VIP is from the external public IP address range. If I browse the VIP, I can successfully reach the web servers through the VCD/AVI integration.
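A quick reachability test from any machine that can reach the VIP is a plain HTTP request; the address below is a placeholder for your own virtual IP:

# an HTTP response here confirms traffic is flowing through the AVI Service Engines to the pool members (placeholder VIP)
#curl -I http://203.0.113.10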

This completes the basic integration and configuration of LBaaS using Cloud Director and NSX Advanced Load Balancer. Feel free to share feedback.

Configuring Ingress Controller on Tanzu Kubernetes Grid

Featured

Contour is an open source Kubernetes ingress controller providing the control plane for the Envoy edge and service proxy. Contour supports dynamic configuration updates and multi-team ingress delegation out of the box while maintaining a lightweight profile. In this blog post I will be deploying the ingress controller along with a Load Balancer (the LB was deployed in this post). You can also expose the Envoy proxy as a NodePort, which allows you to access your service on each Kubernetes node.

What is Ingress in Kubernetes

“NodePort” and “LoadBalancer” let you expose a service by specifying that value in the service’s type. Ingress, on the other hand, is a completely independent resource from your service. You declare, create and destroy it separately from your services.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.


Pre-requisite

Before we begin we’ll need to have a few pieces already in place:

  • A Kubernetes cluster (See Here on How to Deploy TKG)
  • kubectl configured with admin access to your cluster
  • The bundle of Tanzu Kubernetes Grid extensions, downloaded and unpacked (available from here)

Install Contour Ingress Controller

Contour is an ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. To install Contour, follow the steps below:

  • Take the VMware Tanzu Kubernetes Grid Extensions Manifest 1.1.0 downloaded in the pre-requisite stage, move it to your client VM and unzip it.
  • You deploy Contour and Envoy directly on Tanzu Kubernetes clusters. You do not need to deploy Contour on management clusters.
  • Set the context of kubectl to the Tanzu Kubernetes cluster on which to deploy Contour.
    • #kubectl config use-context avnish-admin@avnish
  • First, install cert-manager on the Kubernetes cluster
    • kubectl apply -f tkg-extensions-v1.1.0/cert-manager/
  • Deploy Contour and Envoy on the cluster using the command below (a quick verification follows this list):
    • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/vsphere/
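Once the manifests are applied, it is worth confirming that the Contour and Envoy pods are running and that the Envoy service of type LoadBalancer has been assigned an external IP. These are generic kubectl checks; the namespace name can vary between extension versions, so the search is cluster-wide:

# confirm the Contour and Envoy pods are up and note which namespace they landed in
#kubectl get pods -A | grep -E 'contour|envoy'
# the envoy LoadBalancer service should show an EXTERNAL-IP once the load balancer has provisioned it
#kubectl get svc -A | grep envoy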

This completes the installation of the Contour ingress controller on the Tanzu Kubernetes cluster. Let's deploy an application and test the functionality.

Deploy a Sample Application

Next, we need to deploy at least one Ingress object before Contour can serve traffic. Note that as a security feature, Contour does not expose a port to the internet unless there is a reason it should. A great way to test your Contour installation is to deploy a sample application.

In this example we will deploy a simple web application, configure load balancing for it using the Ingress resource, and access it using the load balancer IP/FQDN. This application is available inside the examples folder of the bundle we downloaded from VMware. Let's deploy the application:

  • Run the command below to deploy the application; it creates a new namespace named “test-ingress”, two services and one deployment (verify the objects as shown after this list).
    • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/examples/common
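To confirm the sample application came up, list what was created in the new namespace; this only assumes the “test-ingress” namespace created by the manifest above:

# the deployment, its pods and the two backend services (s1 and s2) should all be present
#kubectl get deployments,pods,svc -n test-ingress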

That is a very simple way of installing the application. Now let's create the Ingress resource.

Create Ingress Resource

Let’s imagine a scenario where the “foo” team owns http://www.foo.bar.com/foo and the “bar” team owns http://www.foo.bar.com/bar. Considering this scenario:

  • Here is Ingress Resource Definition for our example application:
    • apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: https-ingress
        namespace: test-ingress
        labels:
          app: hello
      spec:
        tls:
        - secretName: https-secret
          hosts:
            - foo.bar.com
        rules:
        - host: foo.bar.com
          http:
            paths:
            - path: /foo
              backend:
                serviceName: s1
                servicePort: 80
            - path: /bar
              backend:
                serviceName: s2
                servicePort: 80
  • Let's deploy it using the command below:
    • #kubectl apply -f tkg-extensions-v1.1.0/ingress/contour/examples/https-ingress
    • Check the status and grab the external IP address of the Contour “envoy” proxy.
    • Add an /etc/hosts entry mapping the above IP address to foo.bar.com (see the sketch after this list).
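For example, assuming the Envoy service received the placeholder address 10.10.10.10, the host entry and a quick check of the Ingress object would look like this:

# map the hostname used in the Ingress rules to the Envoy external IP (placeholder address)
#echo "10.10.10.10 foo.bar.com" | sudo tee -a /etc/hosts
# confirm the https-ingress object exists and lists the expected host
#kubectl get ingress -n test-ingress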

Test the Application

To access the application, browse the foo and bar services from a desktop that has access to the service network (a curl-based check follows the list below).

  • If you browse /bar, the bar service responds.
  • If you browse /foo, the foo service responds.
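The same test can be run from the command line. This assumes the /etc/hosts entry added above; -k is used because the example secret carries a self-signed certificate:

# /foo should be answered by service s1 and /bar by service s2
#curl -k https://foo.bar.com/foo
#curl -k https://foo.bar.com/bar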

This completes the installation and configuration of Ingress on a VMware Tanzu Kubernetes Grid cluster. Contour is VMware's open source ingress controller, offers a rich feature set, and can be found Here; when customers choose the Tanzu portfolio, they get Contour as a VMware-supported component.

Getting Started with Tanzu Service on VMware Cloud on AWS

VMware Tanzu Kubernetes Grid (TKG) is a multi-cloud Kubernetes footprint that customers/partners can run on-premises on vSphere, on VMware Cloud on AWS, and in the public cloud on Amazon EC2 and Microsoft Azure VMs.

TKG provides container orchestration through Kubernetes, which is now built into the vSphere 7 platform. As a VMware Cloud on AWS customer, you can take advantage of this new functionality to build Kubernetes clusters in the same platform you've grown accustomed to using to manage your virtual infrastructure.

Take control of Cloud Resources and give freedom to Developers based on Personas

Virtualization Administrator: They define resource allocations and permissions for users to create their own Kubernetes clusters according to their own specifications: access policies, storage policies, and memory and CPU restrictions for teams needing Kubernetes access.

Developer or Platform Administrator: They can create new Kubernetes clusters within the defined access policies, upgrade those clusters and scale clusters within the approved resource allocations.

VMware recognizes that not all environments are running on top of vSphere. Tanzu Kubernetes Grid (TKG) leverages the same ClusterAPI engine as VMware Tanzu to manage cluster lifecycles, and can run on any infrastructure. VMware provides three variants of TKG:

  • Tanzu Kubernetes Grid Multi-Cloud (TKGm): An installer-driven wizard to set up Kubernetes environments across multiple clouds, for example on AWS EC2 or Azure native VMs
  • Tanzu Kubernetes Grid Service (TKGS), aka vSphere with Tanzu: Natively integrated with vSphere 7+ and available to customers at no extra cost (basic version) on VCF on-premises as well as VMware Cloud on AWS
  • Tanzu Kubernetes Grid Integrated Edition: VMware Tanzu Kubernetes Grid Integrated Edition (formerly known as VMware Enterprise PKS) is a Kubernetes-based container solution with advanced networking, a private container registry, and life cycle management.

Enable Tanzu Service on VMware Cloud on AWS

Pre-requisite:

  • Make sure an SDDC with at least three nodes is deployed and running with enough available resources (at least 112 GB of available memory and sufficient free capacity for 16 vCPUs).
  • Get three CIDR blocks for the deployment. These ranges must not overlap with the management CIDR or any other networks used on-premises or in the VMware Cloud on AWS SDDC.
  • You can activate Tanzu Kubernetes Grid in any SDDC at version 1.16 and later.
  • If the Edge cluster has been configured with the medium configuration, the SDDC cluster requires a minimum of three hosts for activation.
  • If the Edge cluster has been configured with the large configuration, the SDDC cluster requires a minimum of four hosts for activation.

Once the pre-requisites are ready, go to the VMware Cloud on AWS SDDC and click “Activate the Tanzu Kubernetes Service”.

The activation process checks the required resources and only moves ahead if the pre-requisites are met.

On the next screen:

  • Leave the Service CIDR as the default or pick one of your choice (non-overlapping); it is used for the Tanzu Supervisor services of the cluster
  • Enter the “Namespace Network CIDR”, non-overlapping
  • Enter an “Ingress CIDR”, non-overlapping
  • Enter an “Egress CIDR”, non-overlapping
  • Next, click “Validate and Proceed”

NOTE: CIDR blocks of size 16, 20, 23, or 26 are supported, and must be in one of the “private address space” blocks defined by RFC 1918 (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16). 

Finally, once validation is done, click “Activate Tanzu Kubernetes Grid”.

This starts the activation process, and you should see “Activating Tanzu Kubernetes Grid” on your SDDC tile. The process should complete within 20-30 minutes.

It is an easy process to enable your SDDC to run VMs and containers together. When activation is complete, log in to the SDDC vCenter and click on Workload Management.

Persona (Virtualization/vSphere Administrator): The vSphere administrator creates a vSphere Namespace on the Supervisor Cluster, sets resource limits and permissions on the namespace so that DevOps engineers can access it, and provides the URL of the Kubernetes control plane to the DevOps engineers, who can then create their own Kubernetes clusters and run their workloads.

Step 1: Set permissions so that DevOps engineers can access the namespace.

From the Permissions pane, select Add Permissions.

Select an identity source, a user or a group, and a role, and click OK.

Step 2: Assign persistent storage to the namespace. Storage policies that you assign to the namespace control how persistent volumes and Tanzu Kubernetes cluster nodes are placed within datastores in the SDDC environment.

From the Storage pane, select Add Storage.

Select a storage policy to control datastore placement of persistent volumes and click OK

The VM class is a VM specification that can be used to request a set of resources for a VM. The VM class is controlled and managed by a vSphere administrator, and defines such parameters as the number of virtual CPUs, memory capacity, and reservation settings. The defined parameters are backed and guaranteed by the underlying infrastructure resources of a Supervisor Cluster.

Workload Management offers several default VM classes. Generally, each default class type comes in two editions: guaranteed and best effort. A guaranteed edition fully reserves resources that a VM specification requests. A best effort class edition does not and allows resources to be overcommitted. Typically, a guaranteed type is used in a production environment.
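Once a DevOps engineer has logged in to the Supervisor Cluster with kubectl (the login step is shown later in this post), the available classes can be listed; this is a generic vSphere with Tanzu check shown here for illustration:

# list the VM classes defined on the Supervisor Cluster (guaranteed and best-effort editions)
#kubectl get virtualmachineclass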

The vSphere administrator can set up additional limits based on use cases and requirements.

Copy the namespace URL by clicking “Copy link” and give it to your DevOps/platform administrator.

Persona (DevOps/Platform Administrator)

How to Access and Work?

From a new VM (clientvm) or from their own desktop/laptop, the DevOps engineer can access this newly created namespace and then create a new Kubernetes cluster. When the new VM is provisioned, power it on, SSH to it and download the command-line tools from vCenter, making sure the Supervisor Cluster address in the command below is changed to the one you copied earlier:

#wget https://k8s.Cluster-1.vcenter.sddc-18-139-9-54.vmwarevmc.com/wcp/plugin/linux-amd64/vsphere-plugin.zip

Unzip the downloaded bundle:
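A minimal sketch, assuming the archive was downloaded to the current working directory; the bundle typically extracts a bin directory containing kubectl and the kubectl-vsphere plugin:

# extract the CLI tools and make them available on the PATH for this session
#unzip vsphere-plugin.zip
#export PATH=$PWD/bin:$PATH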

Now let's log in to the Supervisor Cluster by entering the following:

kubectl vsphere login --vsphere-username cloudadmin@vmc.local --server=https://k8s.Cluster-1.vcenter.sddc-18-139-9-54.vmwarevmc.com
Enter the password for cloudadmin (or any other user) to complete the login.
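After a successful login, the vSphere plugin creates kubectl contexts for the Supervisor Cluster and for each namespace you have access to; switch to the namespace context before creating clusters. The namespace name below is a placeholder:

# list the contexts created by the login and switch to the vSphere Namespace context (placeholder name)
#kubectl config get-contexts
#kubectl config use-context <your-namespace-name>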

From here onwards, DevOps can create their own Kubernetes clusters and deploy applications. They can also utilize VMware's multi-cloud management platform, Tanzu Mission Control, to spin up Kubernetes clusters using a GUI.

For DevOps to use the GUI, the vSphere administrator needs to register the VMware Cloud on AWS management cluster with Tanzu Mission Control. Let's do that:

Register This Management Cluster with Tanzu Mission Control

The Tanzu service ships with a namespace for Tanzu Mission Control. This namespace exists on the Supervisor Cluster, and it is where you install the Tanzu Mission Control agent.

The vSphere Namespace provided for Tanzu Mission Control is identified as svc-tmc-cXX (you can look it up as shown below).
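A quick way to find the exact namespace name on your Supervisor Cluster, using generic kubectl after logging in as shown earlier:

# the Tanzu Mission Control agent namespace follows the svc-tmc-cXX naming pattern
#kubectl get ns | grep svc-tmc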

To integrate the Tanzu Kubernetes Grid Service with Tanzu Mission Control, install the agent on the Supervisor Cluster.

Register the Supervisor Cluster with Tanzu Mission Control and obtain the Registration URL. See Register a Management Cluster with Tanzu Mission Control.

On the client VM, create a .yaml file (for example, tmc.yaml) with the content below:

apiVersion: installers.tmc.cloud.vmware.com/v1alpha1
kind: AgentInstall
metadata:
  name: tmc-agent-installer-config
  namespace: <NAMESPACE captured in above step>
spec:
  operation: INSTALL
  registrationLink: <TMC-REGISTRATION-URL captured from TMC console>

Apply this YAML using:

#kubectl create -f tmc.yaml

You can also check the status of the TMC registration by running the command below:

#kubectl get pods -n <ns name>

Now go back to Tanzu Mission Control; after some time you should see your Supervisor Cluster ready.

DevOps/platform admins are now ready to deploy TKC clusters as well as containers. This completes this part of the blog; in the next part I will cover how to create TKC clusters, run applications within containers, and expose them to the internet.