Building a 2+1 Exchange 2007 Single Copy Cluster
The process of building an SCC active/passive cluster is well documented; I recommend the following article series on MSExchange.org by Henrik Walther.
Interestingly, Henrik’s article states that Exchange 2007 supports only active/passive clusters, not active/active. At some point, however, Microsoft began supporting N+1 clusters, which have at least one passive node. Microsoft highly recommends against this configuration, but some organizations may still choose it as their high availability strategy. The following link confirms the support, though beyond a brief note that the process is essentially the same as building an active/passive cluster, there is no discussion of how to build one. http://technet.microsoft.com/en-us/library/aa998607.aspx
So here are the basic steps to build one. I successfully built a 2+1 cluster using VMware Server 2.0, so I’ll include the steps for that as well.
1. On the VMware Server host, create the virtual machines. (I used a Windows Server 2003 Enterprise Edition base image to ensure identical configurations for each node.)
a. Add a second network adapter for the private heartbeat network.
2. Power up each node, add it to the domain, and add the cluster service account to the local Administrators group.
3. Test network connectivity on each network adapter.
4. Power down the virtual machines.
5. On the first virtual machine, create the shared storage drives for the quorum and for each active node’s database storage. When adding the drives, attach them to a second SCSI controller.
a. In the virtual machine’s .vmx file, add the entries scsi<#>.sharedBus = "virtual" (where <#> is the number of the second SCSI controller) and disk.locking = "FALSE"
6. On the other virtual machines, add the drives created above in the same order and with identical SCSI assignments.
a. In these virtual machines’ .vmx files, add the same entries as above.
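As a concrete illustration, the .vmx entries for each node might look like the following sketch. The controller number (scsi1) and the .vmdk file names are placeholders; match them to your own disk layout, and keep them identical across all three nodes.

```
# Hypothetical fragment of a node's .vmx file (example names only)
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
disk.locking = "FALSE"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "quorum.vmdk"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "mbx1-data.vmdk"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "mbx2-data.vmdk"
```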
7. Power up the first virtual machine (Node1) and initialize, format and assign drive letters to the drives through Disk Management.
8. Test access to the drives by creating a test text file on each drive. (I like to name each file after the drive letter so I can identify the original drive assignment on the other nodes.)
9. Shut down the node and power up another node. Assign the drive letters to the attached drives and test access.
10. Repeat step 9 for the last node.
11. Shut down the last node and power up the first node.
12. From there, perform the cluster build as described in Part 2 of the article series above.
a. On the first node, create the cluster.
b. Power up the other nodes and add them to the cluster.
c. Test the successful transfer of the cluster group between each node.
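The transfer test in step 12c can also be done from the command line with cluster.exe on Windows Server 2003. This is a sketch; the node names NODE2 and NODE3 are placeholders for your own node names.

```
REM Move the default cluster group to another node to verify failover
cluster group "Cluster Group" /moveto:NODE2
REM Confirm the group came online on the target node
cluster group "Cluster Group" /status
REM Repeat for the remaining node
cluster group "Cluster Group" /moveto:NODE3
```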
13. Add the first active mailbox node as described in Part 3 of the article series above.
14. Add the first passive node, also described in the Part 3 article.
15. On the second active node, create the second active mailbox node using the same steps as for the first.
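For reference, the clustered mailbox server in step 15 is created with Exchange 2007’s setup.com using the /newCms switch, just as for the first active node. The name and IP address below are placeholders, and additional switches may apply to your environment; follow the Part 3 article for the full procedure.

```
REM Run from the Exchange 2007 setup files on the second active node
setup.com /newCms /CMSName:MBXCLUS2 /CMSIPAddress:192.168.10.21
```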
After this, the configurations are primarily according to preference. I prefer the following:
· Configure the private network for internal cluster communications only.
· Move the data drives for each mailbox node into their respective cluster groups. This requires taking the clustered mailbox server offline; use the Stop-ClusteredMailboxServer PowerShell cmdlet rather than Cluster Administrator. (I successfully corrupted portions of the transaction logs by using Cluster Administrator.)
· Set the possible owners of each data drive to its respective active and passive nodes.
· Add the storage drives as dependencies of the Information Store resource.
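Taking the clustered mailbox server offline cleanly, as the second bullet describes, looks roughly like this in the Exchange Management Shell. The identity MBXCLUS1 and the reason text are placeholders for your own clustered mailbox server name.

```powershell
# Run on the node that currently owns the clustered mailbox server
Stop-ClusteredMailboxServer -Identity MBXCLUS1 -StopReason "Moving data drives between cluster groups"

# ...move the physical disk resources in Cluster Administrator, then bring it back:
Start-ClusteredMailboxServer -Identity MBXCLUS1
```

Stopping the clustered mailbox server this way lets Exchange dismount the databases cleanly, which is what avoids the transaction log corruption I hit when using Cluster Administrator alone.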