Thursday, 12 January 2012

RedHat Cluster Suite and Conga - Linux Clustering

This how-to describes a simple step-by-step installation of the Red Hat Cluster Suite on three CentOS nodes and prepares them as members of a cluster. You will also install the web-based management suite, known as Conga.

You will use three nodes to form the cluster and one additional node as the cluster management node; the management node will not take part in the cluster itself. All the nodes and the management node should be resolvable, either through hosts file entries or DNS (a sample /etc/hosts is shown after the node list below).

Cluster Nodes:
cnode1:
    eth0-192.168.2.151/24 - external-lan
    eth1-192.168.1.200/26 - internal-lan cluster
    eth2-192.168.200.1/26 - internal-iscsi-san lan
cnode2:
    eth0-192.168.2.152/24 - external-lan
    eth1-192.168.1.201/26 - internal-lan cluster
    eth2-192.168.200.2/26 - internal-iscsi-san lan
cnode3:
    eth0-192.168.2.153/24 - external-lan
    eth1-192.168.1.202/26 - internal-lan cluster
    eth2-192.168.200.3/26 - internal-iscsi-san lan

Cluster Management Node:
centos:
    eth0-192.168.2.150/24
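
If you are not using DNS, hosts file entries along these lines on every node (including the management node) will do. This is only a sketch based on the addressing above; the network the names resolve to is the one that will carry cluster and management traffic, so adjust it to suit your layout.

    # /etc/hosts - example entries
    192.168.2.150   centos
    192.168.2.151   cnode1
    192.168.2.152   cnode2
    192.168.2.153   cnode3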

The cluster, its management interface and the underlying service daemons communicate over a number of network ports, so for the purposes of this article you can disable the firewalls on these nodes (or open the required ports, as shown further down).

OS - All Nodes:
    CentOS 6 Minimal
   
Cluster Nodes - Software Installation:
    yum groupinstall "High Availability"
    yum install ricci
   
Cluster Management Node - Software Installation:
    yum groupinstall "High Availability Management"
    yum install ricci

Copy this initial sample cluster configuration file to /etc/cluster/cluster.conf on all the cluster nodes (cnode1, cnode2 and cnode3).
<?xml version="1.0"?>
<cluster config_version="1" name="cl1">
    <clusternodes>
        <clusternode name="cnode1" nodeid="1"/>
        <clusternode name="cnode2" nodeid="2"/>
        <clusternode name="cnode3" nodeid="3"/>
    </clusternodes>
</cluster>

This initial file sets the cluster name to 'cl1' and defines the three cluster nodes.
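
Before starting any services you can sanity-check the file on each node; CentOS 6 ships a small validator with the cman package (assuming it is present on your minimal install):

    # validates /etc/cluster/cluster.conf against the cluster schema
    ccs_config_validate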

Now some services have to be enabled and started, first on the cluster nodes and then on the management node, as shown below.

Cluster Nodes:
    chkconfig iptables off
    chkconfig ip6tables off
    chkconfig ricci on
    chkconfig cman on
    chkconfig rgmanager on
    chkconfig modclusterd on

    Create a password for the 'ricci' service user with 'passwd ricci' (Conga will prompt for this password when the node is added).
   
    service iptables stop
    service ip6tables stop
    service ricci start
    service cman start
    service rgmanager start
    service modclusterd start
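
    Once cman and rgmanager are running on all three nodes, a quick membership check is worth doing; clustat and cman_tool come with the packages installed above:

    # run on any cluster node
    clustat              # member status and any configured services
    cman_tool nodes      # node list with join status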

Cluster Management Node:
    chkconfig iptables off
    chkconfig ip6tables off
    chkconfig luci on
    chkconfig ricci on

    service iptables stop
    service ip6tables stop
    service luci start
    service ricci start

The luci service is the management service that presents the web-based cluster interface over HTTPS on port 8084; it can be accessed in any browser at https://<cluster management node FQDN or hostname>:8084/

The ricci service is the underlying daemon that handles cluster configuration sync, file copying, and service start/stop requests; it listens on TCP port 11111.

luci : 8084
ricci : 11111

cman, rgmanager and modclusterd are the actual cluster services; they in turn start further daemons that make the clustering happen and keep it alive.
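
If you would rather keep iptables enabled instead of turning it off as above, rules along these lines should be enough. The luci and ricci ports are listed above; the corosync/cman, dlm and modclusterd ports are the defaults documented for the Red Hat Cluster Suite, so verify them against your release:

    # on the cluster nodes
    iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT   # corosync/cman
    iptables -I INPUT -p tcp --dport 11111 -j ACCEPT       # ricci
    iptables -I INPUT -p tcp --dport 21064 -j ACCEPT       # dlm
    iptables -I INPUT -p tcp --dport 16851 -j ACCEPT       # modclusterd
    # on the management node
    iptables -I INPUT -p tcp --dport 8084 -j ACCEPT        # luci
    service iptables save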

Open a browser and enter the Conga node URL, which in this case is https://centos:8084/


After clicking 'OK' on the initial warning you will be presented with the login screen. Enter the root user and root password of that system to start the interface.

Now click 'Add cluster', enter the first node cnode1 and its ricci password, and click OK; luci will detect the other two nodes as well. Enter their ricci passwords and the cluster will be added to the cluster management interface.


The cluster can now be managed and configured from this interface.


Take care here: the cluster.conf file sometimes does not get synced to all cluster nodes, and nodes left with a stale configuration version can end up fenced. In such cases copy the cluster.conf file from cnode1 to all the other nodes.
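
A minimal way to re-sync by hand, assuming you have bumped config_version in the file so the nodes treat it as newer, is something like the following run from cnode1 (on older releases you may need to pass the new version number to cman_tool explicitly):

    # copy the corrected config to the other nodes
    scp /etc/cluster/cluster.conf cnode2:/etc/cluster/cluster.conf
    scp /etc/cluster/cluster.conf cnode3:/etc/cluster/cluster.conf
    # ask cman to activate the new configuration version
    cman_tool version -r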


If all the nodes are in sync, their uptime is shown in the cluster nodes list.

With that, the cluster is up, live and configured in no time, and other clusters can later be added to this management interface for easy maintenance.

- Bellamkonda Sudhakar

Thursday, 5 January 2012

Microsoft Windows ISCSI SAN & Clusters - 2

In part 2 we will connect a client to the SAN iSCSI target created in the previous post: http://sudhakarbellamkonda.blogspot.com/2012/01/microsoft-windows-iscsi-san-clusters-1.html

Let's assume that the client system is a Windows 2003 server and the Microsoft iSCSI Initiator is installed. There will be a link to the iSCSI Initiator on the desktop and in the Control Panel; double-click it to open the interface and follow the screen prints.

When installing the Microsoft iSCSI Initiator, tick the MPIO support for iSCSI option.
Open the interface, click the Discovery tab and click Add.

Enter the IP address of the SAN iSCSI target server.

Next click the Targets tab and it should show the disk. Click 'Log On' and tick the option to automatically restore the connection at system reboot.

Click OK and the SAN iSCSI disk is added to the system. Open Disk Management and check that the disk is present, then format it and make sure the drive letter is appropriate.
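
For scripting or troubleshooting, the same connection can also be made with the iscsicli tool that ships with the Microsoft iSCSI Initiator. This is only a sketch: the portal address is the SAN server from the previous post, and the target IQN placeholder should be replaced with the name reported by ListTargets.

    rem register the target portal and list the targets it advertises
    iscsicli QAddTargetPortal 192.168.2.100
    iscsicli ListTargets
    rem log on to the target (use the IQN reported by ListTargets)
    iscsicli QLoginTarget <target IQN>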



If you are using this SAN iSCSI disk for a cluster, add the disk on all the nodes as shown and then add it as a cluster resource.
- Bellamkonda Sudhakar

Microsoft Windows ISCSI SAN & Clusters - 1

SANs (Storage Area Networks) are costly and complex, and there is not much free software available to developers and architects who want to test and configure clusters.

StarWind is one company that makes a free version of its SAN product available with some limitations, but these are in no way restrictive when creating a shared iSCSI SAN for clustering purposes, and it can easily be used for DEV, TEST, SOHO or internal non-production use.

Here I am going to walk you through creating a viable SAN for such use. It can be installed on a Windows 2003 or Windows 2008 server.

System:
  • Windows 2008 or 2003 (if it is 2003 then the Microsoft iSCSI Initiator needs to be installed, which can be downloaded from the Microsoft Download Center)
  • 100 GB of free HDD space (a second HDD is even better)
  • Two network Interfaces
    NetworkCard1 : 192.168.2.100
    NetworkCard2 : 192.168.2.51

    These IPs can be whatever you choose, as per your requirements.

    The free StarWind software can be downloaded from http://www.starwindsoftware.com/download-starwind-free; you will need to register so that the license file is sent to you. This license file needs to be copied to the above server.

    Software file       - starwind.exe
    License key file  - licensekey.swk
  • If using a Windows 2003 server then download Initiator-2.08-build3825-x86fre.exe from Microsoft's download center.

    First install the iSCSI Initiator if using Windows 2003, and make sure that the iSCSI Initiator service is started and set to Automatic on both Windows 2003 and 2008 servers (see the commands below).

    Double click the starwind.exe file and follow the screen prints.
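
    If the initiator service is not already running, it can be set to start automatically and started from a command prompt; the service name is MSiSCSI on both Windows 2003 and 2008 (a quick sketch, verify on your own system):

    rem set the Microsoft iSCSI Initiator service to start automatically
    sc config MSiSCSI start= auto
    rem start it now and confirm it is running
    net start MSiSCSI
    sc query MSiSCSI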

If the iSCSI Initiator service is not started and set to Automatic, this error comes up and the installation does not proceed until the service is started and set to Automatic.

A desktop icon will be created. Double-click it to initialise your SAN.



The SAN Console minimises and sits in the system tray as seen here. Double-click it to bring up the management interface.
Click Add Host

Keeping the defaults is OK.
Before the server can be registered with the obtained license, connect the management interface to the server host.

The username is 'root' and the password is 'starwind'.
Click Host -> Registrations -> Install License and then browse to the license file and load it.

Now create a virtual disk that can be presented to clients on the storage network: click 'Add Target'.
Give the target an alias that is easy to identify; do not change the target name itself unless you understand iSCSI naming.

Enter the complete path and filename for the disk image, in this instance sqldisk1.ibv.

Tick 'Allow multiple concurrent connections' so that this disk can be used for clustering.



Clicking on the configuration tab shows the network IPs over which this disk will be advertised.

To connect to this disk from a client system, continue with part 2: http://sudhakarbellamkonda.blogspot.com/2012/01/microsoft-windows-iscsi-san-clusters-2.html