Multiple NetX Server Instances

Multiple NetX server instances and a network load balancer (NLB) can be used to provide failover capabilities, massive scalability, and resource control. 
 

High Availability

Deploy a primary and a secondary instance of NetX. The NLB routes traffic to the primary instance while monitoring its heartbeat. If it detects an outage, the NLB automatically routes traffic to the secondary NetX instance until the primary is restored.

Scalability

Deploy any number of NetX instances and direct traffic in a round-robin sequence using an NLB to distribute the load across all available instances. The NLB can automatically remove traffic from an instance that is either too slow or otherwise unavailable. Scaling horizontally as your traffic needs dictate provides massive scalability for the DAM.

Server Resource Management

Route specific user populations to specific NetX instances to efficiently control and manage underlying server resources. For example, two NetX instances can be dedicated to general usage, while a third instance is dedicated exclusively to ingesting new assets into the cluster.

How multiple NetX instances work with a Network Load Balancer

In a high-availability environment, traffic is routed to multiple instances of NetX via a Network Load Balancer. When using NetX in a multi-purpose capacity (one node for import and one node for downloading), each node will have its own DNS hostname. The only requirement is that the NLB provide “sticky” sessions; each user must be routed to a specific instance. The actual failover responsibility rests with the NLB: either configuration option, failover or scalability, requires heartbeat monitoring and automatic failover routing to the secondary server(s).
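
As a rough illustration of what the NLB's heartbeat check does, the following sketch probes each node over HTTP; the hostnames reuse the examples from this guide, and the probed URL stands in for whatever path your load balancer is configured to check.

    # Minimal sketch of a heartbeat-style probe against each NetX node.
    # appserver1/appserver2 are example hostnames; the probed path is
    # illustrative -- substitute whatever URL your NLB actually checks.
    for node in appserver1.mydomain.com appserver2.mydomain.com; do
        status=$(curl -s -o /dev/null -w '%{http_code}' "http://${node}/")
        if [ "$status" = "200" ]; then
            echo "${node}: healthy (HTTP ${status})"
        else
            echo "${node}: unhealthy (HTTP ${status}) -- NLB would stop routing here"
        fi
    done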

Synchronizing multiple NetX instances

Multiple NetX instances can share a central database and repository for storing assets. NetX manages concurrency, collisions, and caching between the instances. The underlying messaging system is robust and threaded. Message timing can be configured to push updates across all instances within minutes, ensuring that data is quickly and reliably available to all users.

Requirements

  • If you'd like to purchase additional server instances to run in a distributed configuration, please contact your account manager.
  • For high availability: a third-party network load balancer that supports “sticky” sessions. 


Configuration

This guide assumes familiarity with standard NetX installation components. Please reference our installation guides or contact your account manager for assistance.

1. Deploy additional instances

After the primary NetX application instance is created, a secondary instance needs to be deployed. This instance can exist either on the same server or on a separate server. The NetX configuration files (exogen-config*) from {application root dir}/netx/config need to be copied from the primary node to the secondary node, and so on for any additional nodes.
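
As an illustration, assuming SSH access between the nodes and an example application root of /opt/netx, the copy might look like this:

    # Example sketch: copy the exogen-config* files from the primary node to a
    # secondary node. The hostname and application root path are illustrative.
    NETX_CONFIG=/opt/netx/netx/config     # i.e. {application root dir}/netx/config
    rsync -av "${NETX_CONFIG}/"exogen-config* appserver2.mydomain.com:"${NETX_CONFIG}/"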

2. Manage the URLs of your instances

You will need to update the sys.site_domain, sys.docroot_url, and emailTpl.shareLinkURL properties in {application root dir}/netx/config/exogen-config.xml. The sys.docroot_url and sys.site_domain properties should be set to the URL of the individual node (e.g., http://appserver1.mydomain.com). The emailTpl.shareLinkURL property is used for writing email links, so it needs to be set to the URL of the load balancer (e.g., http://netx.mydomain.com).
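
A quick way to sanity-check these values on each node is to grep for them; the expected values below are illustrative, reuse the example hostnames above, and assume an example config path.

    # Sketch: confirm the URL-related properties on one node.
    CONFIG=/opt/netx/netx/config/exogen-config.xml     # example location
    grep -E 'sys\.site_domain|sys\.docroot_url|emailTpl\.shareLinkURL' "$CONFIG"
    # Expected on node 1 (illustrative values):
    #   sys.site_domain       -> http://appserver1.mydomain.com
    #   sys.docroot_url       -> http://appserver1.mydomain.com
    #   emailTpl.shareLinkURL -> http://netx.mydomain.com   (the load balancer)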

3. Repository share

The repository must be located at the same place on the filesystem for every node. If it lives on a different server, it needs to be mounted on both servers at the same location. For example, if it is at /mnt/nfs/repository on the primary server, it needs to exist at /mnt/nfs/repository on the secondary server.
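
For example, on a typical NFS setup (the file server name and export path below are assumptions for illustration), the mount would be created identically on both servers:

    # Sketch: mount the shared repository at the same path on both servers.
    sudo mkdir -p /mnt/nfs/repository
    sudo mount -t nfs fileserver.mydomain.com:/export/netx/repository /mnt/nfs/repository
    # To make the mount permanent, add it to /etc/fstab on both nodes:
    echo 'fileserver.mydomain.com:/export/netx/repository /mnt/nfs/repository nfs defaults 0 0' | sudo tee -a /etc/fstab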

4. Application file share

Similar to the repository, the "appFiles" directory also needs to be shared. Make sure the image.fileDirectory property points to a network-mounted location and is set to the same path on both instances (e.g., /mnt/nfs/appFiles).
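
A minimal check on each instance, assuming the same example paths as above, might be:

    # Sketch: verify the appFiles share is mounted and referenced by
    # image.fileDirectory on this node (paths are the examples from this guide).
    CONFIG=/opt/netx/netx/config/exogen-config.xml     # example location
    test -d /mnt/nfs/appFiles && echo "appFiles mount present"
    grep 'image.fileDirectory' "$CONFIG"               # should point at /mnt/nfs/appFiles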

5. Common database

The database.url, database.user, and database.password properties need to point to the same database, or to a synchronized/clustered database, in which case the URL may differ between nodes.
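
To confirm that both nodes point at the same database, one option is to compare the relevant properties across nodes; the hostnames and config path are illustrative, and the password is deliberately left out of the comparison.

    # Sketch: compare database connection settings across nodes.
    CONFIG=/opt/netx/netx/config/exogen-config.xml
    for node in appserver1.mydomain.com appserver2.mydomain.com; do
        echo "== ${node}"
        ssh "$node" "grep -E 'database\.(url|user)' '$CONFIG'"
    done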

6. NetX properties

Both instances will need the property hydra.dataManagerEnabled=true in {application root dir}/netx/config/exogen-config.xml.

Please be aware that application configuration changes made on a single node will not automatically sync to other nodes. Any changes to preferences or properties made in the NetX UI (or on a single node’s exogen-config.xml) will need to be replicated manually to other nodes in the cluster. Please use caution to avoid inconsistent application behavior for users.
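
One simple way to catch drift after making changes on a single node is to diff the config file against the other node(s); the hostname and path below are illustrative.

    # Sketch: spot configuration drift by diffing exogen-config.xml between the
    # local node and a remote node.
    CONFIG=/opt/netx/netx/config/exogen-config.xml
    ssh appserver2.mydomain.com "cat '$CONFIG'" | diff "$CONFIG" - \
        && echo "configs match" || echo "configs differ -- replicate your changes"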

7. Tomcat configuration changes

If you are deploying multiple instances on the same server, make sure to create a unique $NETX_ID variable for each instance in {application root dir}/bin/setenv.sh. Additionally, a unique HTTP port must be configured for each instance in {application root dir}/conf/server.xml.
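
For example, with two instances installed under illustrative roots /opt/netx1 and /opt/netx2 (how NetX consumes $NETX_ID is not shown here), the per-instance settings might look like:

    # Sketch for two instances on one server, each with its own application root.
    # Values and paths are illustrative.
    #
    #   /opt/netx1/bin/setenv.sh   ->  NETX_ID=netx1
    #   /opt/netx1/conf/server.xml ->  <Connector port="8080" ... />
    #   /opt/netx2/bin/setenv.sh   ->  NETX_ID=netx2
    #   /opt/netx2/conf/server.xml ->  <Connector port="8081" ... />
    #
    # Note: with two Tomcat instances on one host, the shutdown port in
    # server.xml (<Server port="...">) must also be unique per instance.
    grep NETX_ID /opt/netx1/bin/setenv.sh /opt/netx2/bin/setenv.sh
    grep -m1 'Connector port' /opt/netx1/conf/server.xml /opt/netx2/conf/server.xml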

Upgrade procedure

When upgrading multiple NetX instances, all nodes should first be shut down. Each node should then be upgraded in succession. Only once the first node has started up completely should the other nodes be started; the application may be in the middle of database schema changes that need to complete before other nodes start using the database.
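
A sketch of that sequence, with illustrative hostnames and paths and a simple HTTP readiness check standing in for "started up completely", might look like:

    # Sketch of the upgrade sequence. Hostnames, the application root, and the
    # readiness check URL are illustrative; the upgrade itself is a placeholder.
    NODES="appserver1.mydomain.com appserver2.mydomain.com"
    APP_ROOT=/opt/netx

    # 1. Shut down every node first.
    for node in $NODES; do ssh "$node" "$APP_ROOT/bin/shutdown.sh"; done

    # 2. Upgrade each node in succession (your upgrade steps go here).

    # 3. Start the first node and wait until it is fully up, since it may be
    #    applying database schema changes.
    ssh appserver1.mydomain.com "$APP_ROOT/bin/startup.sh"
    until curl -sf -o /dev/null http://appserver1.mydomain.com/; do
        echo "waiting for the first node to finish starting..."
        sleep 15
    done

    # 4. Only then start the remaining node(s).
    ssh appserver2.mydomain.com "$APP_ROOT/bin/startup.sh"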

Load balancers

  • The Network Load Balancer must support "sticky" sessions. This can be done via IP address or via a cookie or header that is tracked by the load balancer.
  • IP address load balancing is not recommended when an office sits behind NAT and all users appear to come from a single source IP address.
  • The Tomcat session cookie (JSESSIONID) can also be used for load balancing; a quick check of sticky behavior is sketched after this list.
  • SSL termination must be done at the load balancer.
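
As a rough check that sticky sessions are working, you can capture the session cookie from the load balancer and replay it; the hostname reuses the example from this guide, and confirming which backend actually served each request requires a load-balancer-specific header, if one is exposed.

    # Sketch: confirm the load balancer issues a session cookie and that it is
    # replayed on subsequent requests.
    JAR=$(mktemp)
    curl -s -c "$JAR" -o /dev/null http://netx.mydomain.com/
    grep -i 'JSESSIONID' "$JAR" && echo "session cookie captured"   # or your LB's own sticky cookie
    # Replay a few requests with the same cookie jar; with sticky sessions these
    # should all land on the same NetX instance.
    for i in 1 2 3; do
        curl -s -b "$JAR" -o /dev/null -w "request ${i}: HTTP %{http_code}\n" http://netx.mydomain.com/
    done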

Session persistence

Setting the user.sessionPersistence property to true allows session keys to be shared between nodes. However, be aware that this does not (yet) transfer session attributes between nodes.