
Applies to: Windows Server 2022, Azure Stack HCI, version 20H2; Windows Server 2019, Windows Server 2016

A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service. For information about using a failover cluster in Azure Stack HCI, see Update Azure Stack HCI clusters.


This guide describes the steps for installing and configuring a general purpose file server failover cluster that has two nodes. By creating the configuration in this guide, you can learn about failover clusters and familiarize yourself with the Failover Cluster Management snap-in user interface in Windows Server 2019 or Windows Server 2016.
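As a rough sketch of where the steps in this guide lead, the same two-node file server cluster can also be created with the FailoverClusters PowerShell module. All computer names, role names, disk names, and IP addresses below are examples, not values from this guide:

```powershell
# Create a two-node cluster (names and address are illustrative)
New-Cluster -Name "FSCluster" -Node "Node1", "Node2" -StaticAddress 192.168.1.100

# Add a clustered general purpose file server role using a cluster disk
Add-ClusterFileServerRole -Name "FS1" -Storage "Cluster Disk 1" -StaticAddress 192.168.1.101
```

The snap-in interface described in this guide performs the equivalent operations through the Create Cluster and High Availability wizards.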

Overview for a two-node file server cluster

Servers in a failover cluster can function in a variety of roles, including the roles of file server, Hyper-V server, or database server, and can provide high availability for a variety of other services and applications. This guide describes how to configure a two-node file server cluster.

A failover cluster usually includes a storage unit that is physically connected to all the servers in the cluster, although any given volume in the storage is only accessed by one server at a time. The following diagram shows a two-node failover cluster connected to a storage unit.


Storage volumes or logical unit numbers (LUNs) exposed to the nodes in a cluster must not be exposed to other servers, including servers in another cluster. The following diagram illustrates this.


Note that for the maximum availability of any server, it is important to follow best practices for server management: for example, carefully managing the physical environment of the servers, testing software changes before fully implementing them, and carefully keeping track of software updates and configuration changes on all clustered servers.

The following scenario describes how a file server failover cluster can be configured. The files being shared are on the cluster storage, and either clustered server can act as the file server that shares them.

Shared folders in a failover cluster

The following list describes shared folder configuration functionality that is integrated into failover clustering:

Display is scoped to clustered shared folders only (no mixing with non-clustered shared folders): When a user views shared folders by specifying the path of a clustered file server, the display will include only the shared folders that are part of the specific file server role. It will exclude non-clustered shared folders and shares that are part of separate file server roles that happen to be on a node of the cluster.

Access-based enumeration: You can use access-based enumeration to hide a specified folder from users' view. Instead of allowing users to see the folder but not access anything in it, you can choose to prevent them from seeing the folder at all. You can configure access-based enumeration for a clustered shared folder in the same way as for a non-clustered shared folder.
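As an illustration, access-based enumeration can be toggled on an existing share with the SmbShare PowerShell module; the share name here is an example:

```powershell
# Enable access-based enumeration on a share named "Data" (example name)
Set-SmbShare -Name "Data" -FolderEnumerationMode AccessBased

# Confirm the setting
Get-SmbShare -Name "Data" | Select-Object Name, FolderEnumerationMode
```

The same property can also be set from the share's properties in the graphical management tools.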

Offline access: You can configure offline access (caching) for a clustered shared folder in the same way as for a non-clustered shared folder.

Clustered disks are always recognized as part of the cluster: Whether you use the failover cluster interface, Windows Explorer, or the Share and Storage Management snap-in, Windows recognizes whether a disk has been designated as being in the cluster storage. If such a disk has already been configured in Failover Cluster Management as part of a clustered file server, you can then use any of the previously mentioned interfaces to create a share on the disk. If such a disk has not been configured as part of a clustered file server, you cannot mistakenly create a share on it. Instead, an error indicates that the disk must first be configured as part of a clustered file server before it can be shared.
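Once a disk belongs to a clustered file server role, a share can also be created on it from PowerShell. The role name, share name, and path below are examples; the -ScopeName parameter ties the share to the clustered file server's client access point:

```powershell
# Create a share on a disk owned by the clustered file server role "FS1"
# (share name, path, and scope name are illustrative)
New-SmbShare -Name "Data" -Path "G:\Shares\Data" -ScopeName "FS1"
```

Attempting the same operation on a disk that is not part of a clustered file server produces the error described above.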

Integration of Services for Network File System: The File Server role in Windows Server includes the optional role service called Services for Network File System (NFS). By installing the role service and configuring shared folders with Services for NFS, you can create a clustered file server that supports UNIX-based clients.
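As a sketch, the Server for NFS role service can be installed on each node with Server Manager or from PowerShell:

```powershell
# Install the Server for NFS role service on the local node
Install-WindowsFeature -Name FS-NFS-Service -IncludeManagementTools
```

Run this on both nodes so that the clustered file server can serve NFS clients regardless of which node owns the role.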

Requirements for a two-node failover cluster

For a failover cluster in Windows Server 2016 or Windows Server 2019 to be considered an officially supported solution by Microsoft, the solution must meet the following criteria.

The fully configured solution (servers, network, and storage) must pass all tests in the validation wizard, which is part of the failover cluster snap-in.
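The same validation can be run from PowerShell with the Test-Cluster cmdlet; the node names here are examples:

```powershell
# Validate the prospective cluster configuration before (or after) creating it
Test-Cluster -Node "Node1", "Node2"
```

The cmdlet produces an HTML validation report equivalent to the one generated by the Validate a Configuration wizard.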

The following will be needed for a two-node failover cluster.

Servers: We recommend using matching computers with the same or similar components. The servers for a two-node failover cluster must run the same version of Windows Server. They should also have the same software updates (patches).


Network adapters and cable: The network hardware, like other components in the failover cluster solution, must be compatible with Windows Server 2016 or Windows Server 2019. If you use iSCSI, the network adapters must be dedicated to either network communication or iSCSI, not both. In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways of accomplishing this. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network that is constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
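As one example of removing a single point of failure at the adapter level, two physical NICs can be combined into a team with the built-in NIC Teaming feature; the team and adapter names below are illustrative:

```powershell
# Team two physical adapters into one logical adapter (names are examples)
New-NetLbfoTeam -Name "ClusterTeam" -TeamMembers "NIC1", "NIC2" -TeamingMode SwitchIndependent
```

Note that if iSCSI is in use, the adapters carrying iSCSI traffic should remain dedicated to iSCSI rather than being teamed with general network traffic.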