Installing Virtual Core
This chapter describes how to install and configure Core-v in iSCSI/block mode (VGC-1000 and VGC-1500) or NFS/file mode (VGC-1500 model only in version 5.0). It includes these sections:
•  Overview of Core-v
•  Hardware and software requirements
•  Installing and configuring Core-v for ESXi
•  Obtaining the Core-v VM image
•  Gathering information
•  Opening the vSphere client
•  Setting the storage mode to NFS
•  Logging in to Core-v
•  Purchasing the token and receiving the licenses
Overview of Core-v
Core-v is a virtualized edition of Core that runs on VMware ESX, ESXi 5.1, ESXi 5.5, and ESXi 6.0. While Core-v can run on ESXi 5.5, certain ESXi 5.5 features, such as larger LUNs and the SATA driver, are not supported.
Note: ESXi 5.0 is no longer supported and will not work with Core-v. The minimum version is ESXi 5.1.
Core-v provides the same functionality and performance as Core, depending on which mode you configure (NFS/file or iSCSI/block) and how you provision your virtual machine (VM). If you intend to use Core-v in NFS/file mode, you must set the storage mode to NFS after you deploy the Core-v image. For details, see Setting the storage mode to NFS.
Note: In Core-v version 5.0, NFS/file mode is only supported on the VGC-1500 model.
VMware ESX and ESXi are virtualization platforms that enable you to install and run Core-v as a virtual appliance. For details about VMware ESX and ESXi, go to http://www.vmware.com.
The hardware must support virtual technology. To ensure hardware compatibility, go to http://www.vmware.com/resources/compatibility/search.php.
Note: VM snapshots are not supported by Core-v for ESX.
Hardware and software requirements
This section describes the hardware and software requirements for installing and running Core-v.
Hardware requirements for VGC-1000 and VGC-1500
It is important to know the number of branches you plan to support before you allocate resources for your Core-v installation. This table lists the resources we recommend for the indicated maximum number of branches.
Note: In Core-v version 5.0, NFS/file mode is only supported on the VGC-1500 model.
Model      | Memory reservation | Disk space | Recommended CPU reservation | Maximum data set size | Maximum number of branches
VGC-1000-U | 2 GB               | 25 GB      | 2 @ 2.2 GHz                 | 2 TB                  | 5
VGC-1000-L | 4 GB               | 25 GB      | 4 @ 2.2 GHz                 | 5 TB                  | 10
VGC-1000-M | 8 GB               | 25 GB      | 8 @ 2.2 GHz                 | 10 TB                 | 20
VGC-1500-L | 32 GB              | 350 GB     | 8 @ 2.2 GHz                 | 20 TB                 | 30
VGC-1500-M | 48 GB              | 350 GB     | 12 @ 2.2 GHz                | 35 TB                 | 30
Note: We strongly recommend that you allocate resources exactly as specified in the table. Ensure that the memory and CPU reservations are enforced so that the hypervisor does not share these resources with other VMs it may be hosting. Reserving these resources allows the Core-v instance to operate as expected.
By default, Core-v for ESX is configured to support 100 endpoints. To support additional endpoints, provision appropriately.
Core-v for ESX uses hard disk 1 for the management system and hard disk 2 for statistics. When you increase the size of hard disk 2 to accommodate additional endpoints, Core-v resizes it nondestructively. If you decrease the size of the disk, its contents are deleted.
If you do not allocate memory, data storage, and CPU resources sufficient for the maximum number of endpoints, you trigger the Virtual Machine Configuration alarm, Raise Alarm When Virtual Machine is Detected to be Underprovisioned. This alarm displays a specific message for each underprovisioned resource.
VM resource | Alarm message
Memory      | Not enough memory (available = X MB, required = X MB)
Storage     | Not enough disk2 storage (available = X MB, required = X MB)
CPU         | Not enough cumulative CPU (available = X MHz, required = X MHz)
For example, you might receive the following alarm message:
Not enough cumulative CPU (available = 1861.260000 MHz, required = 2000.000000 MHz)
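As a sketch, the check behind the CPU alarm amounts to comparing available against required capacity and formatting a message. The following shell fragment models that comparison; the threshold values are hypothetical examples, not figures from any particular deployment:

```shell
# Sketch of the underprovisioning check for cumulative CPU.
# Values are hypothetical; Core-v performs this check internally.
check_cpu() {
  available_mhz=$1
  required_mhz=$2
  if [ "$available_mhz" -lt "$required_mhz" ]; then
    printf 'Not enough cumulative CPU (available = %s MHz, required = %s MHz)\n' \
      "$available_mhz" "$required_mhz"
  fi
}

check_cpu 1861 2000
```

The same comparison applies to the memory and storage alarms, with MB in place of MHz.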
For information on Core-v alarms, see the SteelFusion Design Guide.
Recommendations for optimal performance
VM configuration is central to Core-v performance. Follow these tips for best results:
•  Use a Gigabit link for the auxiliary interface - For optimal performance, connect the auxiliary virtual interfaces to physical interfaces that have a capacity of at least 1 Gbps.
•  Do not share physical NICs - Assign a physical NIC to a single auxiliary interface. Do not share physical NICs destined for other virtual interfaces with other VMs running on the ESX host; otherwise, bottlenecks might result.
•  Always reserve virtual CPU cycles - For best performance, Core-v must receive sufficient CPU resources. Enforce this requirement by reserving the number of virtual CPUs that the model is designed to run with, and also reserve the corresponding clock cycles in CPU MHz. For example, for a VGC-1000-U model installed on a quad-core Xeon-based system running at 2.6 GHz, you would reserve two vCPUs and 2 x 2.6 GHz of CPU cycles using vSphere.
•  Do not over-provision the physical CPUs - Do not run more virtual CPUs than there are physical cores. For example, if an ESX host is running on a quad-core CPU, all the VMs on the host together should use no more than four virtual CPUs.
•  Use a server-grade CPU for the ESX host - We recommend Xeon or Opteron.
•  Always reserve RAM - Memory is another very important factor in determining Virtual Core performance. For details, see Hardware requirements for VGC-1000 and VGC-1500.
•  Do not over-provision physical RAM - The total virtual RAM needed by all running VMs should not be greater than the physical RAM on the system.
•  Do not use low-quality storage - Make sure that the Virtual Core disk used for the Virtual Machine Disk (VMDK) is located on a physical disk medium that supports a high number of I/O operations per second (IOPS). For example, use NAS, storage array, or dedicated SATA disks.
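The CPU-reservation tip above amounts to simple arithmetic: multiply the model's vCPU count by the host's per-core clock speed. This shell sketch computes the MHz figure to enter in vSphere; the host clock speed here is a hypothetical example:

```shell
# Hypothetical host: quad-core Xeon at 2.6 GHz; a VGC-1000-U model uses 2 vCPUs.
VCPUS=2
CORE_MHZ=2600

# Reserve vCPUs multiplied by the per-core clock, per the tip above.
RESERVATION_MHZ=$((VCPUS * CORE_MHZ))
echo "Reserve ${VCPUS} vCPUs and ${RESERVATION_MHZ} MHz in vSphere"
```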
Installing and configuring Core-v for ESXi
This section provides an overview of the basic steps to install and configure Core-v for ESXi, followed by detailed procedures.
Follow these tasks in order:
1. Verify your hardware requirements to ensure that the hardware you have set aside is sufficient to run Core-v for ESX.
2. Obtain the Core-v package from Riverbed Support.
3. Gather network settings for the configuration wizard.
4. Deploy the Core-v image.
5. (NFS mode only) Change the storage mode to NFS.
6. Power on the VM, start Core-v, and log in.
7. Complete the Core-v configuration.
8. Exit the configuration wizard.
9. Purchase a token from Riverbed Sales.
10. Refer to the SteelFusion Core Management Console User’s Guide for configuration specifics. Additional information is available in these documents:
•  SteelFusion Command-Line Interface Reference Manual
•  Riverbed Command-Line Interface Reference Manual
•  SteelFusion Design Guide
Obtaining the Core-v VM image
Core-v is provided by Riverbed as an image that contains the VMX and VMDK files necessary to create the VM.
The Core-v image is an installable open virtual appliance (OVA) package. OVA is a platform-independent, efficient, extensible, and open packaging distribution format. The OVA package provides a complete specification for Core-v for ESX, including its required virtual disks, CPU, memory, networking, and storage. The following OVA packages are available:
•  image.ova for the VGC-1000 series
•  image-vgc.ova for the VGC-1500 series
Note: Model upgrades from a VGC-1000 series to a VGC-1500 series are not supported through licensing. You must deploy the correct OVA package.
The OVA file is a compressed .tar package that quickly creates a VM with predefined settings. It contains the following files:
•  OVF file - The XML description of Core-v.
•  VMDK file - Contains the management system.
•  Manifest file - The checksum of the OVF and VMDK.
•  VMX file - The primary configuration that is created when the OVA is deployed.
To obtain the OVA package, log in to your customer account at https://support.riverbed.com.
Gathering information
Before you begin, read the release notes for the product at https://support.riverbed.com. They contain important information about this release. Next, gather the following information:
•  Hostname
•  Domain name
•  IP address
•  DNS server
•  Interface IP addresses
•  Netmask
•  Default gateway
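One convenient way to record these settings before you run the configuration wizard is a small worksheet. This shell fragment shows the shape; every value is a hypothetical placeholder to replace with your own:

```shell
# Settings worksheet for the Core-v configuration wizard (example values only).
CV_HOSTNAME=core-v1
CV_DOMAIN=example.com
CV_IP_ADDRESS=10.0.0.50
CV_DNS_SERVER=10.0.0.2
CV_NETMASK=255.255.255.0
CV_GATEWAY=10.0.0.1

echo "Primary interface: ${CV_IP_ADDRESS}/${CV_NETMASK} via ${CV_GATEWAY}"
```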
Opening the vSphere client
Each package contains a predefined virtual hardware configuration for Core-v. Do not open or modify any of the files in the package. The package files require approximately 25 GB of disk space.
Installation procedures vary depending on whether you are using the VMware VI or vSphere client. The examples in this document are created using vSphere.
Riverbed provides Core-v for ESX as an OVA file for version 4.1 and newer systems.
Deploying the OVF template
This section describes how to install and configure the default Core-v on a VMware ESXi host using the vSphere client.
The standard installation puts both VMDKs in a single local storage location. The local storage holds the VM files and is referred to as a datastore during OVF deployment, but it is not used for the RiOS datastore, which is used for network optimization.
Make sure the local storage datastore you select has enough capacity for the OVA package to be installed. You need at least 25 GB. The larger VMDK containing the management system can be installed on any datastore type. The smaller VMDK contains the Core-v statistics. The datastore must have enough room to expand to the required size of Core-v. Do not share host physical disks (such as SCSI or SATA disks) between VMs.
To deploy the OVA template
1. Open VMware vSphere, type the host IP address or hostname, type your username and password, and click Login.
2. Choose File > Deploy OVF template.
3. Click Deploy from file, and then click Browse.
4. Select the OVA file (filename ending in .ova), and click Open.
5. Click Next.
6. Verify that the OVA file is the one you want to deploy.
7. Click Next.
8. Type a name for the VM.
9. Click Next.
10. Select Thick provisioned format unless you have a specific reason for requiring thin provisioning.
Note: We recommend thick provisioning. In some cases, thin provisioning could impact application performance. If ESXi storage becomes full, the Core appliance could crash.
11. Click Next.
12. Map the source network to a destination network by selecting the destination network name from the drop-down list.
The primary and ETH interfaces are used for data connection, while AUX is used primarily for management. The physical ESXi interfaces that you are connecting to should have GigE capability.
13. Click Next.
14. Verify the deployment settings and click Finish.
A message shows the amount of time before the deployment is finished. When the deployment finishes, a message tells you the deployment was successful. You can edit disk size and provisioning settings later by right-clicking the name of your VM.
15. Click Close.
The new VM appears in the VM inventory under the hostname or host IP address.
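The same deployment can be scripted with VMware's ovftool utility instead of the vSphere client. In this sketch, the host, VM name, datastore, and network names are hypothetical placeholders that you must adjust for your environment:

```shell
# Deploy the OVA from the command line with ovftool.
# All names below are example placeholders.
ESXI_HOST=esxi01.example.com
VM_NAME=core-v-1

# --diskMode=thick matches the thick-provisioning recommendation in step 10.
if command -v ovftool >/dev/null 2>&1; then
  ovftool \
    --name="$VM_NAME" \
    --datastore=datastore1 \
    --diskMode=thick \
    --network="VM Network" \
    --acceptAllEulas \
    image.ova \
    "vi://root@${ESXI_HOST}"
else
  echo "ovftool not installed; download it from vmware.com to deploy from the CLI"
fi
```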
Setting the storage mode to NFS
This section describes how to set the storage mode to NFS/file after you have deployed the Core-v VM.
To set the storage mode to NFS/file
1. Power on the Core-v VM from the VMware vSphere console.
2. Log in to the Core-v VM:
Login: admin
Password: password
3. Type yes to run the Configuration Wizard, and follow the steps to specify a hostname, IP settings, and a password.
4. Press Enter to save the configuration changes.
5. At the prompt, type enable.
6. At the next prompt, type configure terminal.
7. Enter the service reset set-mode-file command.
A message notifies you that the service is changing to a different storage mode.
8. Enter the service reset set-mode-file confirm command.
The system reboots into the operating mode that supports your SFNFS license type.
9. Log in using your new credentials.
10. Enter the show service command to verify that the Core-v is running the SteelFusion Core File Service.
The Core service may be stopped or running, depending on the status of your licenses. Proceed with the licensing instructions to continue setting up the appliance.
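Steps 5 through 8 can be summarized as a console session. The prompt strings below are illustrative; your appliance displays its own hostname:

```
core-v > enable
core-v # configure terminal
core-v (config) # service reset set-mode-file
core-v (config) # service reset set-mode-file confirm
... system reboots into file mode ...
core-v > show service
```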
Logging in to Core-v
This section describes how to log in to Core-v.
You can connect to Core-v through any supported web browser. To connect, you must know the host, domain, and administrator password that you assigned during the initial setup.
Note: Cookies and JavaScript must be enabled in your browser.
To log in to Core-v
1. Enter the URL for Core-v in the location box of your browser:
<protocol>://<host>.<domain>
Where:
–  <protocol> is http or https. The secure HTTPS uses the SSL protocol to ensure a secure environment. If you use HTTPS to connect, you are prompted to inspect and verify the SSL key.
–  <host> is the IP address or hostname you assigned to Core-v during the initial configuration. If your DNS server maps the IP address to a name, you can specify the DNS name.
–  <domain> is the full domain name for Core-v.
The Core-v interface appears, displaying the Sign In page.
2. In the Username text box, the default account admin appears.
You must specify the account admin when you first log in.
Optionally, at a later time, you can configure the monitor username, RADIUS users, or TACACS+ users. For detailed information, see the SteelFusion Core Management Console User’s Guide.
3. In the Password text box, type the password you assigned in the Virtual Core configuration wizard.
4. Click Log In to log in and display the Home page.
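The URL format in step 1 can be sketched in shell. The host and domain below are hypothetical examples standing in for the values you assigned during initial configuration:

```shell
# Assemble the management URL from the initial-configuration values.
PROTOCOL=https          # http is also accepted; https uses SSL
HOST=core-v1            # hostname or IP address assigned to Core-v
DOMAIN=example.com

URL="${PROTOCOL}://${HOST}.${DOMAIN}"
echo "$URL"
# To probe reachability from a shell (self-signed certificates need -k):
#   curl -k -s -o /dev/null -w '%{http_code}\n' "$URL"
```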
Purchasing the token and receiving the licenses
Before you can add licenses to a Core-v, you must purchase a token from Riverbed. The token carries the model number that is assigned to the new Core-v after you complete its licensing. To view your purchased token, log in to your account at https://support.riverbed.com.
Note: For details about licensing, see Managing Riverbed Licenses.
After you receive a token, you are ready to install the licenses.
To activate the token and install the license
1. Log in to Core-v.
2. Choose Settings > Maintenance: Licenses to display the Licenses page.
3. Under License Request, type the token and click Generate License Request Key.
When you enter the token, RiOS returns a license request key.
4. After you have obtained the license request key, go to the Riverbed Licensing Portal at http://licensing.riverbed.com (nonregistered users) or to the Riverbed Support site at https://support.riverbed.com/content/support/my_riverbed/tokens.html (registered users) to generate your license keys.
The license keys include the GCBASE license as well as any other licenses needed for Virtual Core.
The Licensing Portal is a public website; the Riverbed Support website requires registration.
After your licenses are generated, they appear online and are also emailed to you for reference.
5. Copy and paste the license key into the text box. Separate multiple license keys with a space, tab, or line break.
6. Click Add License(s).
Core-v’s status should change to Healthy (indicated in green) a few seconds after you add the GCBASE and GCMSPECV model licenses.
7. Click Config Save Required, next to the Healthy status indicator, to save your configuration.
Note: If you intend to use Virtual Core in NFS/file mode, ensure that you have set the storage operating mode accordingly. For details, see Setting the storage mode to NFS.