Posts Tagged ‘san’

Solaris cluster, MPxIO, Zpools.

Friday, June 26th, 2009

This is remarkably straightforward.

Two nodes, alike in dignity. We preconfigured MPxIO on both of them before installing the cluster software. We also allocated a small LUN that was visible to both nodes before the installation, and made sure each could see it: this was intended to act as the quorum device.
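For reference, the MPxIO part of that prep is just the standard stmsboot dance on each node – a minimal sketch, assuming Solaris 10 with the bundled FC drivers:

# stmsboot -e
(enables MPxIO for the supported HBAs; it asks for a reboot)
# stmsboot -L
(after the reboot, lists the old-to-new device name mappings)

Once the cluster software is on, the shared LUN typically gets added as the quorum device with something like clquorum add dN, dN being whatever DID device the LUN maps to – but that belongs to the install itself.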

Braindump of FC multipathing on Linux

Friday, March 27th, 2009

Start with the cards:

# /sbin/lspci

04:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
04:00.1 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
05:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
05:00.1 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)

Two dual-port cards, effectively.

Kernel module for these:

# /sbin/lsmod | grep ql
qla2xxx 1079969 0
scsi_transport_fc 73673 1 qla2xxx
scsi_mod 188665 7 sg,qla2xxx,scsi_transport_fc,mptsas,mptscsih,scsi_transport_sas,sd_mod

qla2xxx, already loaded.

To find WWNs on RHEL 5.x (CentOS 5.x): WWN information and other FC attributes live under
/sys/class/scsi_host/hostN/device/fc_host:hostN/port_name
for each value of N.

# ls /sys/class/scsi_host/host*/device/fc_host*/port_name
/sys/class/scsi_host/host1/device/fc_host:host1/port_name
/sys/class/scsi_host/host2/device/fc_host:host2/port_name
/sys/class/scsi_host/host3/device/fc_host:host3/port_name
/sys/class/scsi_host/host4/device/fc_host:host4/port_name

(there’s a host0 on this box which is the onboard SAS controller)

# cat /sys/class/scsi_host/host*/device/fc_host*/port_name
0x2100001bxxxxxxxx
0x2101001bxxxxxxxx
0x2100001dxxxxxxxx
0x2101001dxxxxxxxx

So these are the addresses that need zoning.

Scanning for LUNs:

# echo "- - -" > /sys/class/scsi_host/host1/scan
[wait a bit]
# more /proc/scsi/scsi

[new paths to LUNs show up; mpath devices appear under /dev/mapper if that’s configured]
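Since there are four FC hosts on this box, rescanning them one at a time gets tedious; a small loop does the same job (a sketch, using the host numbers from above):

# for h in /sys/class/scsi_host/host[1-4]; do echo "- - -" > $h/scan; done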

After scanning for LUNs (so they show up in /proc/scsi/scsi)…

/sbin/chkconfig multipathd on
– edit /etc/multipath.conf to look like the one on bastet. I made the following changes:

blacklist {
        devnode "*"
}
blacklist_exceptions {
        devnode "^sd[b-z].*"
}

(since sda was always the local SAS root device), and modified the defaults {} section to use a path_grouping_policy of “failover” rather than “multibus” etc., which appears to be fine.
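For the record, the relevant bit of the defaults {} section then looks something like this (a sketch showing only the setting that was changed):

defaults {
        path_grouping_policy    failover
}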

After you’ve done this you should be able to do the following:

/sbin/multipath -v2 -d

-d is for dry-run (make no changes). It’ll tell you which maps it would create (mpath0, mpath1 and so on), the WWID of each volume (a long hex string), what paths are available to it, which SCSI devices those paths map to, etc.

You can then add a multipath{} stanza to /etc/multipath.conf which lists that WWID and gives it an alias (i.e., something other than “mpath0” etc.) – we used “sb2cc-lun5” as an example alias.
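That stanza lives inside a multipaths {} block and looks roughly like this – the WWID below is a placeholder, so substitute the one that multipath -v2 -d printed:

multipaths {
        multipath {
                wwid    3600040200xxxxxxxxxxxxxxxxxxxxxxx
                alias   sb2cc-lun5
        }
}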

Start multipathd if it’s not running (/etc/init.d/multipathd start); then, with a possible /sbin/multipath -v2 to pick up new paths and re-read multipath.conf, you’ll find /dev/mapper/sb2cc-lun5 etc. appearing as new block devices.

We labelled these with e2label, made an ext3 filesystem on them (which takes several minutes for a multi-TB filesystem) and put entries into /etc/fstab as follows:

/dev/mapper/sb2cc-lun5 /sb2cc-lun5 ext3 defaults 1 3

You probably don’t want to use ext3 for a large filesystem.
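The label-and-mkfs step above boils down to something like this (a sketch; the -L flag sets the label at creation time, and e2label with just the device name reads it back):

# /sbin/mkfs.ext3 -L sb2cc-lun5 /dev/mapper/sb2cc-lun5
# /sbin/e2label /dev/mapper/sb2cc-lun5
sb2cc-lun5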

These come back happily AND IN A STABLE FASHION on a reboot, which is just as well because the raw “path” devices, sdb..sdi, were juggled on the reboot – that’s just down to how fast the various HBA scans come back.

– Listing current multipath settings: multipath -v2 -l
– Listing what would be changed (no changes made, dry run): multipath -v2 -d
– Making those changes live: multipath -v2

Getting rid of unwanted LUNs:

# /sbin/multipath -v3 -d
[nothing changes, need to rescan]
# echo "- - -" > /sys/class/scsi_host/host1/scan
[wait a bit]
# more /proc/scsi/scsi

[new paths to LUNs show up; mpath devices appear under /dev/mapper]

At that point I noticed that the backup host group on sb3cc had access to some stuff it shouldn’t; I turned that off. Now I need to flush the paths to the LUNs that have disappeared:

# /sbin/multipath -F
# /sbin/multipath -v2 -l
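An aside, not from the original steps: multipath -F only flushes the unused maps. If the stale sdX path devices themselves need to go away as well, the usual trick on these kernels is to delete them through sysfs – sdf below is a made-up example, not one of the real paths here:

# echo 1 > /sys/block/sdf/device/delete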

Only two paths show in the multipath -l output; scan for the LUNs on the other controller…
# echo "- - -" > /sys/class/scsi_host/host2/scan

Now you can create the multipath devices.
# /sbin/multipath -v2
# /sbin/multipath -v2 -l

Two large LUNs show up…

mpath6 (36000402001fc475761ee919c00000000) dm-5 NEXSAN,SATABeast
[size=6.4T][features=0][hwhandler=0]

^^^ this is sb3mvb (…4757 in WWID)

mpath5 (36000402001fc46db60ef903200000000) dm-2 NEXSAN,SATABeast
[size=6.4T][features=0][hwhandler=0]

^^^ this is sb3cc (…46db in WWID)

At that point I edit /etc/multipath.conf to give these useful device names, and run /sbin/multipath -v2.

Mapping MPxIO paths into something sane (array & LUN ids)

Thursday, April 24th, 2008

OK, so MPxIO “Just Works”. I exposed a small number of LUNs (four, to be precise) from each of a pair of SATABeasts. One of those had something on it: to wit, a prototypical “bb-archive” zpool.

zpool status etc. will show the MPxIO devices that comprise the particular zpool in question – but what about the other seven LUNs? And what about when it comes to stitching LUNs together, and so on? This is going to turn into a cross-referencing nightmare!
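One starting place for that cross-reference – an assumption about approach, not something from this post – is mpathadm itself, since the per-LU detail lists the target port names, and the GUID embedded in the device name can be matched against what the SATABeast reports for each volume:

# mpathadm show lu /dev/rdsk/c7t6000402001FC19CA6E9E7EFE00000000d0s2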

Multipathing success.

Thursday, April 24th, 2008

# mpathadm list lu
/dev/rdsk/c7t6000402001FC19CA6E9E7EFE00000000d0s2
Total Path Count: 2
Operational Path Count: 2

… and so on :-)

Solved: should read the SB docs more closely

Wednesday, March 5th, 2008

OK, now I know why this isn’t working. Basically any array (Nexsan-speak for RAID group) is owned by only one controller at a time – and its LUNs are only delivered via a single controller.

If we’d used QLE2462s and dual-attached the X4100 then we’d see multiple paths to LUN20, but they’d all go via SB c0. If the array is failed over to SB c1 then they’d show up there instead.

What I now need to do is to set up MPxIO, configure the first paths to the LUN, then try administratively failing its owning array over to the other controller. It remains to be seen whether this will “just work” or whether I need to further configure MPxIO to know that both results are basically one and the same thing (or even if that’s actually possible).

In any regard, this is looking very good.

At the other end of the problem, we installed Solaris Trusted Extensions, which appear to permit multiple NFS views. We’ll confirm or deny this shortly. TSOL is a bit of a pain to configure – hopefully we can keep the setup requirement to a minimum. If Solaris 10 grows zonable NFS service (or I build a userland NFS server which I can configure myself) this might become more straightforward.

More SATABeast multipathing…

Tuesday, March 4th, 2008

Zoning changes get picked up properly…

Bugger, it “Just Works”

Monday, March 3rd, 2008

Due to the unfortunate ability of the SATABeast to receive and process requests on all four controllers, failing the NFS array over to the second controller makes no discernible difference to the host: it can still talk to c0, alas.

So the only way we’re going to test this is with some brutal combination of FC zone modification and/or controller shutdown.

Multipathing progress…

Monday, March 3rd, 2008

This is looking better. Note that in the previous post, the scsi_vhci.conf entry conforms to the SCSI inquiry format: that is, eight characters are used for the vendor ID (NEXSAN followed by two spaces), with the product ID appended.

Now the SCSI devices appear under /devices/scsi_vhci. However, cfgadm still only shows the first path (i.e., the first controller) as configured for each SATABeast; “cfgadm -c configure c4” still fails.

Solaris/amd64, SATABeast, the story so far.

Monday, March 3rd, 2008

Looks like multipathing via the generic scsi_vhci may be possible. Thus far:

Add this to /kernel/drv/scsi_vhci.conf:

# vendor ID padded to eight characters ("NEXSAN" plus two spaces), product ID appended
device-type-scsi-options-list = "NEXSAN  SATABeast", "symmetric-option";
symmetric-option = 0x1000000;
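For what it’s worth (this isn’t from the original notes), the edit only takes effect once scsi_vhci re-reads its configuration, which in practice tends to mean a reconfiguration reboot:

# touch /reconfigure
# init 6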

Currently cfgadm -c configure c4 is failing, complaining about being unable to create device nodes.

Having said that, the single-pathed zpool create Just Works.
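For reference, the single-path create is just the ordinary form – the disk name below is a placeholder for whatever the SATABeast LUN appears as down the one configured path, and bb-archive is the pool mentioned above:

# zpool create bb-archive c3t0d0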

Work progresses…