A) You already have a SAN or iSCSI device available. Please note that my hardware is not supported, and as such this particular cluster could not find a storage device that meets the requirements for a cluster. I was, however, able to create a Windows Cluster made up of two virtual nodes. The only issue is that there is no storage device available in my environment that my Windows servers will accept 🙁
B) The servers that will be used as a cluster are part of a Windows Domain. See here how to create an Active Directory domain and how to join a machine to a Windows Domain.
At this point we need to create a shared storage device that both nodes (Node1 and Node2) will write to. SQL will be installed on these nodes, and both will access the same storage device. So if, for example, Node1 fails, Node2 will be aware of the state of the data because it keeps reading the data on the shared storage device.
We will use FreeNAS to create this shared storage. This too will be a virtual server. We will dedicate 40 GB of hard drive space for the Operating System itself, and another 400 GB of hard drive space that the nodes will be using.
In addition to that we will dedicate another 500 MB of hard drive space to be used as the Quorum. The Quorum contains data that both nodes read so that all nodes are aware of the state of the Cluster.
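Once the FreeNAS iSCSI target is up, each Windows node can attach the shared disks from an elevated command prompt using the built-in Microsoft iSCSI Initiator. This is only a rough sketch — the portal address and the IQN below are made-up placeholders, not values from my setup:

```shell
REM Hypothetical values - replace 192.168.1.50 and the IQN with your own.
REM Register the FreeNAS box as a target portal:
iscsicli QAddTargetPortal 192.168.1.50

REM List the targets the portal advertises (note the IQN it prints):
iscsicli ListTargets

REM Log in to the target so its disks show up in Disk Management:
iscsicli QLoginTarget iqn.2011-03.example.org.istgt:target0
```

The same steps can be done through the iSCSI Initiator control panel applet; the command line is just easier to repeat on the second node.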
In this step we will add our ESXi server to our Windows Domain. The reason is that this creates a distributed account environment where all the accounts are handled at the Domain Controller level. This allows for easier administration of the ESXi server itself.
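The join itself can be done from the vSphere Client (Configuration → Authentication Services → Join Domain), or scripted with the vSphere CLI. A sketch, assuming the vSphere CLI is installed on a management machine and that the host name and credentials shown are placeholders:

```shell
# Placeholder host name and credentials - substitute your own.
vicfg-authconfig --server esxi.actdir.lol --username root \
    --authscheme AD --joindomain actdir.lol \
    --adusername Administrator --adpassword 'DomainPassword'
```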
In this step we will add DNS records to the Primary Domain Controller and also set up replication of those entries to the secondary Domain Controller. Propagation of that data happens automatically, and only the primary server is allowed to update records; the secondary Domain Controller will not have the ability to update DNS records on the Primary Domain Controller. We will also add aliases for our servers.
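On the Primary Domain Controller, the records and aliases can also be added from an elevated command prompt with dnscmd, which ships with the DNS Server role. A sketch with made-up host names and addresses — adjust them to your environment:

```shell
REM Hypothetical names and addresses - adjust to your environment.
REM Add an A record for one of the nodes:
dnscmd /RecordAdd actdir.lol node1 A 192.168.1.21

REM Add a CNAME alias pointing at that node:
dnscmd /RecordAdd actdir.lol sql1 CNAME node1.actdir.lol.
```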
In this step, for reasons of redundancy, we will add another Domain Controller to the ActDir.lol domain. Essentially this is done so that if the primary Domain Controller goes down, the next available Domain Controller takes over. The standby Domain Controller mirrors the data on the primary Domain Controller.
In this part we will set up Active Directory Services and DNS servers. Reasons why Active Directory will be installed:
1) Integrate the ESXi Server accounts with Active Directory for easier management of the ESXi environment.
2) DNS will be installed with Active Directory and for now it will handle all DNS requests from both Windows and Linux servers. Later on a Linux based DNS server will be deployed.
3) Active Directory will be needed for our Windows SQL failover cluster.
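On a Windows Server of this era the setup behind the list above is started simply by running dcpromo, which launches the Active Directory Domain Services installation wizard and offers to install DNS alongside it. It can also run unattended with an answer file — the file path below is a placeholder:

```shell
REM Interactive wizard:
dcpromo

REM Or unattended, using a prepared answer file (path is hypothetical):
dcpromo /unattend:C:\dcpromo-answer.txt
```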
In this step we will install Windows and Linux operating systems and use them as templates to deploy further instances of these operating systems, in order to minimize deployment time. This entry will not show how to install the individual Operating Systems; it will show the principle behind it. It is important to remember that we are not installing these Operating Systems in VMware Workstation but within VMware ESXi, and as such a different method of installation is needed. We will need to install the vSphere Client in order to manage our ESXi server. To obtain this tool, point your browser to the IP address of your VMware ESXi server; once you reach that web site, select Download vSphere Client.
Please also note that you will have to install the client on an actual physical Windows machine in order to use it. In my experience, vSphere is problematic when installed in a virtual Windows Operating System.
Now that we have created the proper environment for ESXi to run under, we will go ahead and install it. Installation of VMware ESXi is very simple and straightforward. In our previous step we set up VMware Workstation 9 to host ESXi. Below are the steps to install ESXi.
In this part we will install VMware ESXi 5 inside VMware Workstation 9. The VMware ESXi guest Operating System has the following setup.
The 1.9 terabytes are allocated on the LVM Logical Volume /var. The 1 GB hard drive is allocated for later use by our FreeNAS virtual server.
So now let us begin creating our VMware ESXi 5 virtual server.
CPU – Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz
RAM – SDRAM DDR3 1600 22 Gig
Hard Drives – ST2000DL003-9VT1, WDC WD1001FALS-0, ST31500341AS, WD1600AAJS-0
Operating System: Fedora17
This step assumes that you are using Fedora 17. This operating system will host VMware Workstation 9, and VMware ESXi will be installed within Workstation 9. The reason for this unorthodox setup is budget limitations on my part. Fedora 17 is being used to host the virtual environment as a whole, and it also serves to execute other server tasks as well. It is also assumed that you have purchased VMware Workstation 9; ESXi 5 is free to download from VMware. In case VMware Workstation 9 is not an option, you can use KVM, which comes with Linux. View the following link to see how to set up KVM on Fedora 17: http://www.sfentona.net/?p=907
Once you have downloaded the bundle package for Linux from VMware you will need to ensure your Linux system meets the following requirements.
Kernel Version – The default kernel that comes with Fedora 17 is 3.3.4-5.fc17.x86_64. However, I had no luck making VMware Workstation 9 run under this kernel version. Instead I chose to upgrade the kernel to a newer version, 3.6.8-2.fc17.x86_64. I also had to install the kernel headers for that kernel because VMware needs them in order to complete the installation. The following commands had to be run in order to do all of the above.
yum install kernel-3.6.8-2.fc17.x86_64
yum install kernel-headers kernel-devel gcc
After the machine was rebooted and the new kernel was chosen in the GRUB boot loader, VMware Workstation was able to complete its installation once it was launched.
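For completeness, the launch itself is just running the downloaded bundle as root once the right kernel is booted. A sketch — the exact bundle file name is an assumption, so match it against what you actually downloaded:

```shell
# Confirm the newer kernel is the one actually running:
uname -r

# Run the VMware Workstation installer bundle (file name is an assumption):
chmod +x VMware-Workstation-Full-9.0.0-*.bundle
sudo ./VMware-Workstation-Full-9.0.0-*.bundle
```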
In order to avoid hard drive thrashing, it is highly recommended to use LVM on your host system. In Fedora 17 I have created /var as a Logical Volume of 4.2 terabytes.
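For reference, carving a large /var out of several disks with LVM looks roughly like this. The device names below are placeholders, not my actual layout, and these commands are destructive — double-check which disks you point them at:

```shell
# Placeholder device names - substitute your own disks.
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc

# Create the logical volume, put a filesystem on it, and mount it as /var:
lvcreate -L 4.2T -n lv_var vg_data
mkfs.ext4 /dev/vg_data/lv_var
mount /dev/vg_data/lv_var /var
```

Spreading /var across multiple physical disks this way is what keeps the virtual machine images from thrashing a single spindle.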