
 

 

 

SteelCentral™ NetIM Virtual Edition Installation Guide

Virtual Edition for VMware ESXi 6.5, ESXi 6.7, ESXi 7.0

 

Version: 2.4.0

Release Date: September 1, 2021

Revised: September 28, 2021

© 2021 Riverbed Technology. All rights reserved.

Riverbed®, SteelApp™, SteelCentral™, SteelFusion™, SteelHead™, SteelScript™, SteelStore™, Steelhead®, Cloud Steelhead®, Virtual Steelhead®, Granite™, Interceptor®, Stingray™, Whitewater®, WWOS™, RiOS®, Think Fast®, AirPcap®, BlockStream™, FlyScript™, SkipWare®, TrafficScript®, TurboCap®, WinPcap®, Mazu®, OPNET®, and Cascade® are all trademarks or registered trademarks of Riverbed Technology, Inc. (Riverbed) in the United States and other countries. Riverbed and any Riverbed product or service name or logo used herein are trademarks of Riverbed. All other trademarks used herein belong to their respective owners. The trademarks and logos displayed herein cannot be used without the prior written consent of Riverbed or their respective owners.

Portions of SteelCentral™ products contain copyrighted information of third parties. Individual license agreements can be viewed in the NetIM VM web user interface at Help  > Legal Notices.

This documentation is furnished “AS IS” and is subject to change without notice and should not be construed as a commitment by Riverbed Technology. This documentation may not be copied, modified or distributed without the express authorization of Riverbed Technology and may be used only in connection with Riverbed products and services. Use, duplication, reproduction, release, modification, disclosure or transfer of this documentation is restricted in accordance with the Federal Acquisition Regulations as applied to civilian agencies and the Defense Federal Acquisition Regulation Supplement as applied to military agencies. This documentation qualifies as “commercial computer software documentation” and any use by the government shall be governed solely by these terms. All other use is prohibited. Riverbed Technology assumes no responsibility or liability for any errors or inaccuracies that may appear in this documentation.

This manual is for informational purposes only. Addresses shown in screen captures were generated by simulation software and are for illustrative purposes only. They are not intended to represent any real traffic or any registered IP or MAC addresses.

 

 

 


Contents

About this guide

Preparing to deploy NetIM Virtual Edition

    Deployment Requirements

    Deployment Guidelines

    Deployment Guidelines Table

    Access to Network

Deploying NetIM Virtual Edition

Configuring NetIM Virtual Edition

    Setting up the NetIM Manager

    (Optional) Setting up NetIM Data Manager(s)

    Setting up NetIM Worker(s)

    Completing the Swarm Configuration - Starting Microservices

    Setting up the NetIM Core

    Signing in to the NetIM VM web user interface

Post Deployment VM Adjustments for Scalability

    Increasing VM Disk Space

    Increasing VM Memory

    Increasing vCPUs

    Scaling NetIM with Additional Data Managers

    Scaling NetIM with Additional Workers

Troubleshooting

Contacting Riverbed

 


About this guide

 

Riverbed® SteelCentral™ NetIM is an integrated solution for mapping, monitoring, and troubleshooting your infrastructure components.  With NetIM you can capture infrastructure topology information, detect performance issues, map application network paths, diagram your network, and troubleshoot infrastructure problems. Additionally, you can manage infrastructure issues within the context of application, network, and end-user experience for a blended view of overall performance.

 

NetIM provides agentless infrastructure component monitoring to deliver a comprehensive picture of how your infrastructure is affecting network and application performance and how that impacts end-user experience. NetIM offers a broad overview of how the devices on your network are performing to complement your network and application performance management visibility.  NetIM includes:

·        Real-Time Monitoring – NetIM leverages multiple approaches (e.g., synthetic testing, SNMP, CLI) to identify new and changed infrastructure components.

·        Analytics – Measure current performance and identify violations.

·        Topology – Visualize and triage issues quickly.

·        Troubleshooting – Search-based workflows and on-demand network paths help you to quickly identify and troubleshoot infrastructure performance issues.

·        Event and Alert Visibility – View, filter, and drill down into internal alerts, syslog messages, and SNMP traps occurring within your enterprise.

·        Reporting – Report on infrastructure inventory and performance metrics.

 

This guide details the steps to deploy NetIM on a VMware ESXi host.

 

Preparing to deploy NetIM Virtual Edition

Deployment Requirements

·        One or more ESXi servers (geographically collocated in the same datacenter) running ESXi 6.5, 6.7, or 7.0 with resources to minimally support:

o   3 or more virtual machines with 4 virtual CPUs, 16 GB of RAM per VM, 175 GB of storage per VM (75 GB for OS partition; 100 GB for Application & Persistence)

·        VM Snapshots are known to affect performance but can be enabled with appropriate caution.

o   VM Snapshots are supported while NetIM services are down

o   VM Snapshots are supported while NetIM services are up provided the underlying storage is SSD-based, a hierarchy of snapshots is not present, memory is not included in the snapshot, and the snapshot can be completed in 7 seconds or less

o   Note: No adverse effects from snapshots were observed in internal testing on higher-performance VM infrastructure following the above recommendations; snapshots are therefore supported with the following limitation:

§  Riverbed Global Support reserves the right to ask you to disable snapshots if snapshots are suspected of causing instability in your NetIM implementation.

·        vMotion can be enabled with appropriate caution

o   Note: No adverse effects from vMotion were observed in internal testing on higher-performance VM infrastructure; vMotion is therefore supported with the following limitation:

§  Riverbed Global Support reserves the right to ask you to disable vMotion if vMotion is suspected of causing instability in your NetIM implementation.

·        NetIM Manager and Data Managers must be allocated Enterprise-class, high-performance storage (Premium SSD, Standard/General Purpose SSD). A rough write-throughput sanity check is sketched after the deployment considerations below.

o   Sustained throughput per physical host: 8,000 IOPS / 800 MBps

o   Average Response Time: 2-4 ms

·        NetIM Worker and NetIM Core must be allocated Enterprise-class storage (Standard/General Purpose SSD, SSD Accelerated)

o   Sustained throughput per physical host: 6,000 IOPS / 600 MBps

o   Average Response Time: 3-5 ms

·        IP addresses for the virtual machines must be statically assigned or permanently reserved (DHCP reservation)

·        By default, NetIM internally uses IPv4 addresses in the following subnets:

o   10.255.0.0/16

o   10.50.0.0/16

o   10.60.0.0/16

o   172.17.0.0/16

o   172.18.0.0/16

During setup, NetIM will modify its subnet usage if it suspects or detects that one of the above subnets is in use in your network.  The Advanced Docker Configuration step of NetIM setup allows you to view and manually select alternative subnets.  Subnets used internally by NetIM should not be routable in your enterprise network.  (A pre-deployment overlap check is sketched at the end of this requirements list.)

·        NetIM Core should be provisioned with a network interface capable of supporting the throughput required for CLI/SNMP collection and trap/syslog receiving

·        NetIM Worker(s) should be provisioned with a network interface capable of supporting the throughput required to support polling and alert notification

·        Access to a Network Time Protocol (NTP) server is required for time synchronization between components

·        Web browser – Chrome 90 (and above), Firefox 90 (and above), Edge 90 (and above), or Safari 13 (and above)
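Before running setup, you can check whether any of NetIM's default internal subnets (listed above) overlap networks already routed in your environment. The following is a minimal sketch using Python 3's standard ipaddress module; it assumes a Linux host where "ip route" shows the routes visible to the VM, and it is illustrative only, not part of NetIM:

    import ipaddress
    import subprocess

    # NetIM's default internal IPv4 subnets, from the list above.
    NETIM_INTERNAL = ["10.255.0.0/16", "10.50.0.0/16", "10.60.0.0/16",
                      "172.17.0.0/16", "172.18.0.0/16"]

    def local_routes():
        """Parse destination networks out of the host routing table."""
        out = subprocess.run(["ip", "route"], capture_output=True, text=True).stdout
        for line in out.splitlines():
            if not line.strip():
                continue
            dest = line.split()[0]          # first token is the destination
            if dest == "default":
                continue
            try:
                yield ipaddress.ip_network(dest, strict=False)
            except ValueError:
                pass                        # skip tokens that are not networks

    routes = [r for r in local_routes() if r.version == 4]
    for subnet in (ipaddress.ip_network(s) for s in NETIM_INTERNAL):
        for route in routes:
            if route.overlaps(subnet):
                print(f"WARNING: local route {route} overlaps NetIM subnet {subnet}")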

 

Other Important Deployment Considerations:  

·        Due to latency, deployment across geographically dispersed infrastructure is not supported.

·        All virtual infrastructure on which NetIM is deployed must be in the same data center.   

·        Online backups and VM snapshots may affect performance and stability and should be disabled in some circumstances; see the snapshot guidelines above.
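The storage throughput figures in the requirements above are best validated with a dedicated benchmark tool such as fio. As a very rough first check, a timed sequential write against the intended datastore can reveal grossly undersized storage. A crude sketch only (the test path is a placeholder; this measures sequential MBps, not IOPS or latency):

    import os
    import time

    PATH = "/data1/throughput_test.bin"   # placeholder -- point at the datastore under test
    BLOCK = 1024 * 1024                   # 1 MiB per write
    TOTAL_MB = 512

    buf = os.urandom(BLOCK)
    start = time.monotonic()
    with open(PATH, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())              # count only data actually on disk
    elapsed = time.monotonic() - start
    os.remove(PATH)
    print(f"Sequential write: {TOTAL_MB / elapsed:.0f} MBps over {TOTAL_MB} MiB")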

 

Deployment Guidelines

 

Actual deployment requirements depend on your licensed polling limits. For ESXi hosts that are provisioned for running NetIM, the recommended minimum requirements for each component are provided in the Deployment Guidelines Table below.

 

Deployment Guidelines Table

 

Per-VM sizing is shown as vCPUs / memory / storage (OS) / storage (App).

| Devices (*) | Interface Polling (*) | NetIM Manager (**) | NetIM Data Manager (**) | NetIM Worker | NetIM Core | Managers | Data Managers (***) | Workers (****) | Core |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2.5K | 50K | 4 vCPUs / 16 GB / 75 GB / 1 TB | N/A | 4 vCPUs / 16 GB / 75 GB / 100 GB | 4 vCPUs / 16 GB / 75 GB / 100 GB | 1 | 0 | 1 | 1 |
| 5K | 100K | 4 vCPUs / 16 GB / 75 GB / 2 TB | N/A | 4 vCPUs / 16 GB / 75 GB / 100 GB | 6 vCPUs / 32 GB / 75 GB / 200 GB | 1 | 0 | 2 | 1 |
| 10K | 200K | 6 vCPUs / 24 GB / 75 GB / 2 TB | 4 vCPUs / 16 GB / 75 GB / 2 TB | 4 vCPUs / 16 GB / 75 GB / 100 GB | 8 vCPUs / 48 GB / 75 GB / 250 GB | 1 | 1 | 4 | 1 |
| 15K | 300K | 8 vCPUs / 32 GB / 75 GB / 2 TB | 4 vCPUs / 16 GB / 75 GB / 2 TB | 4 vCPUs / 16 GB / 75 GB / 100 GB | 8 vCPUs / 64 GB / 75 GB / 300 GB | 1 | 1 | 6 | 1 |
| 20K | 400K | 8 vCPUs / 40 GB / 75 GB / 3 TB | 4 vCPUs / 16 GB / 75 GB / 3 TB | 4 vCPUs / 16 GB / 75 GB / 100 GB | 8 vCPUs / 80 GB / 75 GB / 350 GB | 1 | 2 | 8 | 1 |
| 25K | 500K | 10 vCPUs / 48 GB / 75 GB / 3 TB | 4 vCPUs / 16 GB / 75 GB / 3 TB | 4 vCPUs / 16 GB / 75 GB / 100 GB | 8 vCPUs / 96 GB / 75 GB / 400 GB | 1 | 2 | 10 | 1 |
| 30K | 600K | 10 vCPUs / 56 GB / 75 GB / 3 TB | 4 vCPUs / 16 GB / 75 GB / 3 TB | 4 vCPUs / 16 GB / 75 GB / 100 GB | 8 vCPUs / 112 GB / 75 GB / 450 GB | 1 | 3 | 12 | 1 |

(* Assuming 5-minute polling and minimal latency between workers and polled elements. If CoS metrics are polled, each CoS definition applied to a polled interface counts as an additional logical interface, thereby increasing the overall polled interface count.)

(** Manager and Data Manager application storage requirements are approximate and depend primarily on metric retention and roll-up settings.  For proof-of-concept deployments, the default application storage of 100 GB may be sufficient if metric retention settings are reduced from the system defaults.)

(*** Write and query performance may improve with additional Data Manager nodes.)

(**** Use the “scale” command on the Manager to scale the poller, alerting, and thresholding services to the number of Workers. See Scaling NetIM with Additional Workers.)
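For planning purposes, the component counts above can be folded into a small lookup helper. A convenience sketch only; the figures are transcribed from the table, and the helper is not part of NetIM:

    # (devices, interfaces polled, managers, data managers, workers, cores)
    TIERS = [
        (2_500,   50_000, 1, 0,  1, 1),
        (5_000,  100_000, 1, 0,  2, 1),
        (10_000, 200_000, 1, 1,  4, 1),
        (15_000, 300_000, 1, 1,  6, 1),
        (20_000, 400_000, 1, 2,  8, 1),
        (25_000, 500_000, 1, 2, 10, 1),
        (30_000, 600_000, 1, 3, 12, 1),
    ]

    def recommended_tier(devices: int):
        """Return the smallest tier whose device limit covers the requested count."""
        for tier in TIERS:
            if devices <= tier[0]:
                return tier
        raise ValueError("beyond the largest published tier -- contact Riverbed")

    _, _, managers, dms, workers, cores = recommended_tier(12_000)
    print(f"{managers} manager, {dms} data manager(s), {workers} worker(s), {cores} core")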

 

Access to Network

Ensure that the following ports are open:

 

| Component | Outbound Ports | Inbound Ports |
| --- | --- | --- |
| NetIM Core | TCP/22 (ssh) | TCP/22 (ssh) |
|  |  | TCP/9190 (http web interface) |
|  |  | TCP/3100 (LUS API clients) |
|  |  | TCP/8543 (https web interface) |
|  |  | TCP/3389 (RDP) |
|  |  | UDP/8162 (SNMP trap; see KBA S33800) |
|  |  | UDP/8514 (syslog; see KBA S33800) |
|  | TCP/25 or other (SMTP) |  |
|  | UDP/161 (SNMP) |  |
|  |  | TCP/3162 (Test Engine Controller) |
|  |  | TCP/9191 (API Gateway) |
|  | TCP/23 (Telnet) |  |
|  | UDP/123 (NTP) |  |
|  |  | TCP/9347 (Portal DCL) |
|  |  | TCP/8085 (cAdvisor) |
|  |  | TCP/9001 (portainer) |
| Manager, Data Manager(s), & Worker(s) | UDP/123 (NTP) |  |
|  | TCP-UDP/7946 (Docker) | TCP-UDP/7946 (Docker) |
|  | TCP-UDP/4789 (Docker) | TCP-UDP/4789 (Docker) |
|  | TCP/2377 (Docker) | TCP/2377 (Docker) |
|  | TCP/22 (SSH) | TCP/22 (SSH) |
|  |  | TCP/3389 (RDP) |
|  | UDP/162 (SNMP trap) |  |
|  | UDP/514 (syslog) |  |
|  |  | TCP/9100 (portainer) |
|  |  | TCP/9001 (portainer) |
|  |  | TCP/8901 (Job Service Monitor) |
|  |  | TCP/8919 (Service Monitor) |
|  |  | TCP/9000 (Kafka Manager) |
|  |  | TCP/8088 (redis commander) |
|  | TCP/80 (Internet hosts) | TCP/80 (PgAdmin4) |
|  |  | TCP/3000 (Grafana) |
|  |  | TCP/8085 (cAdvisor) |
|  |  | TCP/5100 (Elastic HQ) |
|  |  | TCP/9143 (API Gateway) |
|  | TCP/443 (Internet hosts) |  |
|  | TCP/25 or other (SMTP) |  |
|  | UDP/161 (SNMP) |  |
|  | Slack custom webhook port |  |
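Once the VMs are deployed, you can spot-check TCP reachability between components with a short connect test. A minimal sketch; the host names are placeholders for your own VMs, and UDP ports (NTP, SNMP, traps, syslog, and Docker's UDP overlay traffic) cannot be verified with a TCP connect:

    import socket

    CHECKS = [
        ("netim-manager", 2377),   # Docker swarm management (placeholder host names)
        ("netim-manager", 7946),   # Docker node communication (TCP side)
        ("netim-core", 22),        # ssh
        ("netim-core", 8543),      # https web interface
    ]

    for host, port in CHECKS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"OK      {host}:{port}")
        except OSError as exc:
            print(f"FAILED  {host}:{port} ({exc})")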

 

 


 

Deploying NetIM Virtual Edition

Deploying the NetIM OVA packages to ESXi 6.5, 6.7, or 7.0 servers

Install the NetIM software on the ESXi server(s), as follows:

1.      Copy the two OVA packages (netim_core_2XX_XXX.ova and netim_microservices_2XX_XXX.ova) to your local system. 

·        You will create a single NetIM manager from the microservices OVA.

·        You will create one or more NetIM workers from the microservices OVA.

·        You will create zero or more NetIM data managers from the microservices OVA.

·        You will also create a single NetIM core from the netim_core OVA.

2.      Using a supported browser and your ESXi host login credentials, log in to one or more ESXi servers that will host the NetIM VMs. 

·        Note:  The OVAs do not need to be deployed to the same ESXi server.  However, the VMs created must be able to communicate over all required ports.

3.      Click Create/Register VM or select Actions->Create/Register VM.

4.      Click Deploy a virtual machine from an OVF or OVA file, click Next.

5.      Enter a name for the VM (e.g., netim-manager) and drag and drop or browse to select the NetIM microservices OVA file, click Next.

6.      On the Select storage page, select the server drive where you will store the VM files, click Next.

·        Note: Enterprise-class, high-performance storage must be allocated for the NetIM Manager

7.      On the Deployment options page, set Disk provisioning to Thick, click Next.

8.      On the Ready to complete summary page click Finish to start the deployment of the NetIM manager VM.  

9.      (Optional, See Deployment Table) Click Create/Register VM or select Actions->Create/Register VM.

10.  (Optional, See Deployment Table) Click Deploy a virtual machine from an OVF or OVA file, click Next.

11.  (Optional, See Deployment Table) Enter a name for the VM (e.g., netim-data-manager) and drag and drop or browse to select the NetIM microservices OVA file, click Next.

12.  (Optional, See Deployment Table) On the Select storage page, select the server drive where you will store the VM files, click Next.

·        Note: Enterprise-class, high-performance storage must be allocated for the NetIM Data Manager

13.  (Optional, See Deployment Table) On the Deployment options page, set Disk provisioning to Thick, click Next.

14.  (Optional, See Deployment Table) On the Ready to complete summary page click Finish to start the deployment of the NetIM Data manager VM.

15.  Click Create/Register VM or select Actions->Create/Register VM.

16.  Click Deploy a virtual machine from an OVF or OVA file, click Next.

17.  Enter a name for the VM (e.g., netim-worker) and drag and drop or browse to select the NetIM microservices OVA file, click Next.

18.  On the Select storage page, select the server drive where you will store the VM files, click Next.

19.  On the Deployment Options page, set Disk provisioning to Thick, click Next.

20.  On the Ready to complete summary page click Finish to start the deployment of the NetIM worker VM.

21.  Click Create/Register VM or select Actions->Create/Register VM.

22.  Click Deploy a virtual machine from an OVF or OVA file, click Next.

23.  Enter a name for the VM (e.g., netim-core) and drag and drop or browse to select the NetIM core OVA file, click Next.

24.   On the Select storage screen, select the server drive where you will store the VM files, click Next.

25.   On the Deployment options page, set Disk provisioning to Thick, click Next.

26.  On the Ready to complete summary page click Finish to start the deployment. 

27.  When the deployment of the three (or more) VMs has completed, you can see the resulting network structure on the Networking configuration page.

 

Configuring NetIM Virtual Edition

The initial configuration sets up the NetIM components such that they are associated and can communicate with each other.  You perform this configuration through the VM consoles.  After configuration and startup of all NetIM components, the NetIM web UI will be accessible by a web browser for licensing.

Setting up the NetIM Manager

1.      If it is not already powered on, power on the NetIM Manager VM.

2.      Click the console to launch the VM console.

3.      At the netim-appliance login: prompt, enter the default username and password.

login: netimadmin

password: netimadmin

4.      The initial setup wizard will automatically launch.  The setup wizard guides you through the initial configuration of the appliance. Press Enter at any step to accept the current setting “[ ]” and move to the next step.  You will be asked to provide the:

a.      Hostname,

b.      IPv4 and IPv6 network configuration (IPv4 is required. IPv6 is optional.),

Note:  IP address must be statically assigned or permanently reserved through DHCP reservation

c.      Role of the host in the swarm (enter 1 for manager)

d.      Network time protocol (NTP) server,

e.      Time zone.

5.      Enter a new password for the netimadmin, if you desire.

6.      If you used DHCP to provision an IP address for the NetIM manager, enter show network to display the network configuration including IP address.  You will use the manager’s IPv4 address (or the DNS name) to connect the worker to the manager in the next phase.

7.      [Optional] The Advanced Docker Configuration step allows you to review and change the IPv4 subnets that will be configured for internal NetIM communication.

(Optional) Setting up NetIM Data Manager(s)

1.      If it is not already powered on, power on the NetIM Data Manager VM.

2.      Click the console to launch the VM console.

3.      At the netim-appliance login: prompt, enter the default username and password.

login: netimadmin

password: netimadmin

4.      The initial setup wizard will automatically launch.  The setup wizard guides you through the initial configuration of the appliance. Press Enter at any step to accept the current setting “[ ]” and move to the next step.  You will be asked to provide the:

a.      Hostname,

b.      IPv4 and IPv6 network configuration (IPv4 is required. IPv6 is optional.),

Note:  IP address must be statically assigned or permanently reserved through DHCP reservation

c.      Role of the host in the swarm (enter 3 for data manager)

5.      When prompted, join the data manager to the manager by entering the manager’s IPv4 address or DNS name and password. 

6.      Enter a new password for the netimadmin, if you desire.

7.      [Optional] The Advanced Docker Configuration step allows you to review and change the IPv4 subnets that will be configured for internal NetIM communication.

8.      The console will indicate that the data manager is joined to the swarm and the docker services will restart.

a.      Note:  To confirm that the data manager joined the swarm successfully, run “docker node ls” on the console of the manager.

 

Setting up NetIM Worker(s)

1.      If it is not already powered on, power on the NetIM Worker VM.

2.      Click the console to launch the VM console.

3.      At the netim-appliance login: prompt, enter the default username and password.

login: netimadmin

password: netimadmin

4.      The initial setup wizard will automatically launch.  The setup wizard guides you through the initial configuration of the appliance. Press Enter at any step to accept the current setting “[ ]” and move to the next step.  You will be asked to provide the:

a.      Hostname,

b.      IPv4 and IPv6 network configuration (IPv4 is required. IPv6 is optional.),

Note:  IP address must be statically assigned or permanently reserved through DHCP reservation

c.      Role of the host in the swarm (enter 2 for worker)

5.      When prompted, join the worker to the manager by entering the manager’s IPv4 address or DNS name and password. 

6.      Enter a new password for the netimadmin, if you desire.

7.      [Optional] The Advanced Docker Configuration step allows you to review and change the IPv4 subnets that will be configured for internal NetIM communication.

8.      The console will indicate that the worker is joined to the swarm and the docker services will restart.

a.      Note:  To confirm that the worker joined the swarm successfully, run “docker node ls” on the console of the manager.

 

Completing the Swarm Configuration - Starting Microservices

1.      Return to the VM console of the NetIM Manager

2.      Enter “docker node ls” in the console to confirm that the worker appears as a node in the docker swarm listing.

3.      Enter “start all” to start all services including the common, tenant and monitoring stacks.  This process will initially take about 5 minutes to complete.

4.      The name and ID of the tenant stack are included in the output of “start all” and must be entered during the setup of NetIM Core.  The tenant information will usually be:

a.      Name:  default_tenant

b.      ID: 1

 

Setting up the NetIM Core

1.      If it is not already powered on, power on the NetIM Core VM.

2.      Click the console to launch the VM console.

3.      At the netim-appliance login: prompt, enter the default username and password.

login: netimadmin

password: netimadmin

4.      The initial setup wizard will automatically launch.  The setup wizard guides you through the initial configuration of the appliance. Press Enter at any step to accept the current setting “[ ]” and move to the next step.  You will be asked to provide the:

a.      Hostname,

b.      IPv4 and IPv6 network configuration (IPv4 is required. IPv6 is optional.),

Note:  IP address must be statically assigned or permanently reserved through DHCP reservation

5.      When prompted, join the core to the manager by entering the manager’s IPv4 address or DNS name and password.

6.      Enter a new password for the netimadmin, if you desire.

7.       [Optional] The Advanced Docker Configuration step allows you to review and change the IPv4 subnets that will be configured for internal NetIM communication.

8.      When prompted, enter the tenant ID (usually 1) that was setup when “start all” was run on the manager (or run “show tenants” on the manager console to display the tenant ID).

9.      You can now log in to NetIM by pointing your browser to https://<netim-core-hostname or IP address>:8543 and complete licensing and configuration operations.

Signing in to the NetIM VM web user interface

The web user interface is the primary means of access to NetIM.  You use it for further configuration and operation of the NetIM solution.  Connect to NetIM through the web user interface using your web browser. (Note: Make sure that SSL, cookies, and JavaScript are enabled in your browser.)

1.      Point your browser to https://<netim-core-hostname or IP address>:8543.

2.      Enter the username and password, then click the login button. (The default value is ‘admin’ for both username and password.)
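If the sign-in page does not load, you can first confirm that the web interface is answering at all. A minimal sketch; the host name is a placeholder, and certificate verification is disabled on the assumption that the appliance is still using a self-signed certificate:

    import ssl
    import urllib.request

    HOST = "netim-core.example.com"   # placeholder -- use your core VM's name or IP
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # assumes a self-signed certificate

    with urllib.request.urlopen(f"https://{HOST}:8543/", context=ctx, timeout=10) as resp:
        print(f"HTTP {resp.status} from {HOST}:8543 -- web interface is up")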

 

Post Deployment VM Adjustments for Scalability

NetIM OVA resource reservations are minimal.  For large installations, you will need to adjust the VM configurations for your monitoring environment.  The Deployment Guideline Table provides the recommended VM resource reservations for your manager, data manager(s), worker(s) and core.  VM resource adjustments may include:

1.      Increase the data storage of the NetIM Manager (and Data Managers) to 1 TB or greater, as they hold the data storage for metrics.

2.      Increase the memory allotted to NetIM Manager and NetIM Core via the ESXi UI.

3.      Increase the virtual CPUs allotted to NetIM Manager and NetIM Core via the ESXi UI.

4.      Increase the data storage of the NetIM Core to accommodate larger infrastructure models.

 

Increasing VM Disk Space

1.      In the ESXi hypervisor, select the VM and then click Shut down to power down the VM.

2.      Click Edit to edit the VM settings.

3.      Adjust the Hard Disk 2 space allotted for the VM (Hard Disk 1 contains the OS and should not require you to increase allotted storage).

4.      Click Power on to start the VM; then login to the VM console or SSH to the VM console as netimadmin.

5.      A change in the allocated disk size is auto-detected during boot.  The partition will be automatically resized and another reboot initiated.

6.      Verify the expanded logical volume by entering “shell df -Bg” and reviewing the output for /data1.
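As an alternative to reading the df output by hand, the same check can be scripted anywhere you have shell access to the VM and Python 3 available. A minimal sketch:

    import shutil

    total, used, free = shutil.disk_usage("/data1")
    GB = 1024 ** 3
    print(f"/data1: {total / GB:.0f} GB total, "
          f"{used / GB:.0f} GB used, {free / GB:.0f} GB free")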

Increasing VM Memory

1.      In the ESXi hypervisor, select the VM and then click Shut down to power down the VM.

2.      Click Edit to edit the VM settings.

3.      Adjust the Memory allotted for the VM.

4.      Click Power on to start the VM; then open the VM console or SSH to the VM console.

Note:

·        The docker containers automatically scale when memory is increased. However, the Java processes inside the containers may not scale automatically.

o   For NetIM Core, memory allocated to NetIM Core services can be adjusted by using the memory slider on the Server Status page.

5.      After increasing physical memory, virtual memory (swap) size can be increased on the VMs by using the shell command “system swap size auto”.

 

Increasing vCPUs

1.      In the ESXi hypervisor, select the VM and then click Shut down to power down the VM.

2.      Click Edit to edit the VM settings.

3.      Adjust the vCPUs allotted for the VM.

4.      Click Power on to start the VM.

 

Scaling NetIM with Additional Data Managers

You should always try to plan your NetIM deployment so that you allocate the recommended number of data managers for the scale of your deployment.  When you add data managers to the swarm, metric persistence automatically scales and balances across the manager and data managers.  While it is always possible to add data managers after your initial deployment, rebalancing existing persisted data across the manager and data managers incurs additional overhead and should be avoided.

 

Scaling NetIM with Additional Workers

You should attempt to plan your NetIM deployment such that you also allocate the recommended number of workers for the expected scale of your deployment.  However, it is less important to accurately plan and allocate the exact number of workers that your deployment may require.  You can add additional workers and load balance across multiple workers in the swarm at any time. It is important to understand that the tenant services are not automatically scaled and load balanced across the workers.  You need to manually scale up certain services in the tenant stack to take advantage of the added workers.  We recommend that you scale up the poller, alerting, and thresholding services when you add additional workers.  You can scale up swarm services by executing the “scale” command on the NetIM Manager VM:

Syntax: scale tenant-stack/<id>  <tenant-service-name>  <number of replicas>

The “scale” command will persist the service scaling across reboots or restarts.  For example, if you deployed 3 workers in your NetIM swarm, we recommend that you scale your poller, alerting and thresholding services to the number of workers by executing the following commands on the NetIM manager:

scale tenant-stack/1 poller 3

scale tenant-stack/1 alerting 3

scale tenant-stack/1 thresholding 3

 

Troubleshooting

Should you need to adjust the internal subnets, you may do so after initial setup by following these steps:

1.      Stop all services on the core and swarm

2.      Run “docker swarm leave -f” on all non-manager nodes

3.      Run “docker swarm leave -f” on the manager

4.      Run setup again on the manager and answer “yes” when asked whether you want to perform Advanced Docker Configuration.

5.      Run setup again on all data manager and worker nodes

6.      Run setup again on core

 

Contacting Riverbed

Options for contacting Riverbed include:

·        Internet - Find out about Riverbed products at http://www.riverbed.com.

·        Support - If you have problems installing, using, or replacing Riverbed products, contact Riverbed Technical Support or your channel partner who provides support. To contact Riverbed Technical Support, please open a trouble ticket at https://support.riverbed.com or call 1-888-RVBD-TAC (1-888-782-3822) in the United States and Canada or +1 415 247 7381 outside the United States.

·        Professional Services - Riverbed has a staff of engineers who can help you with installation, provisioning, network redesign, project management, custom designs, consolidation project design, and custom-coded solutions. To contact Riverbed Professional Services, go to http://www.riverbed.com or email proserve@riverbed.com.

·        Documentation - Riverbed continually strives to improve the quality and usability of its documentation. We appreciate any suggestions you may have about our online documentation or printed materials. Send documentation comments to techpubs@riverbed.com.