Okay, so I’ve been playing around with Openfiler for the past couple of months. We’re trying to set up a student home-directory NAS device with a mirror machine that takes over if our primary dies. Our machines are hand-built 13 TB NAS servers using 16 x 1TB Seagate disks and a 16-channel SATA2 RAID controller from 3Ware. There are several problems to overcome in this type of setup, so I will try to cover it bit by bit as I finish confirming it at work.

As I said, we’re using a Super Micro case and motherboard (dual quad-core Xeon) and we’ve stuffed a 16-channel 3ware 9650 controller in there. The first issue we had was with hardware: some screwy new firmware on the controller was not working nicely with our 16 x 1TB Seagate drives. We downgraded the firmware and got the machine to POST. Then we created a (roughly) 14 TB container in RAID-6 mode (16 drives, less 2). We further divided up the space into a 20GB boot partition (using the setting in the 3ware BIOS) and a giant (roughly) 13TB partition that will hold our student data. The 20GB partition will later hold our swap space and non-essential (frequently updated) folders under /var (lock, log, etc.).
We have two physically separate machines that are exact copies of each other hardware-wise. The plan initially was to use DRBD and the heartbeat service to create a high-availability NAS cluster, but since we are trying to authenticate (for SMB) against our Windows system, we could not get that configuration working (and frankly, I still don’t trust DRBD, as good as it is). So we decided to create two USB stick images: one for the master and another for the slave. The master is a machine enrolled in our Active Directory domain, and the slave is a passive (private) rsync server. The master USB image is configured with all the AD stuff and two interfaces: one interface serves the NAS and the other runs rsync against our slave/rsync server. When/if the master fails (i.e.: motherboard failure) beyond recognition, we simply plug the master USB stick into our slave machine and reboot it. Since the machines are exact copies of one another, the (old) slave will now be master, and once the (old) master is fixed, it will become the new slave/rsync server. Real simple.
So here is Chapter one – How do you get Openfiler 2.3 to boot off a USB stick:
Before you start you’ll need the following:
- Four USB sticks (2GB+) of the same brand and size.
- Openfiler 2.3 install CD
- A non-Openfiler rescue disk (I used an Ubuntu LiveCD) to fix (reinstall) grub on the USB stick.
Insert your USB stick and boot from the Openfiler 2.3 installation CD. At the boot prompt, type expert (for text mode type expert text; I used graphical mode). Manually configure your partitions. I just had one 2GB partition (ext2) on /. I used ext2 since it has no journal and won’t constantly write to the USB stick. No swap partition at this point. After the install I noticed that something between 600 and 700MB was used by the system, so you might be able to use about 200-300MB for swap if really needed (however, I doubt the usefulness of a swap partition, as USB storage is really slow).

The installer will breeze through to the end. Note that it is realllyyy slow — it took more than an hour on my config. Reboot at the end and boot the OF2.3 CD again in rescue mode by typing “linux rescue” at the prompt. Once you’re at the shell, mount the USB stick manually (fdisk -l might help, as it prints info about all the disks). My USB stick was /dev/sdc, hence the commands below:
mount /dev/sdc1 /mnt/source
chroot /mnt/source
Now you’ve got the partition mounted and your shell chrooted to the root of the USB stick. Next we copy the initrd on the USB stick into a temporary directory (on the stick) and uncompress it so we can modify it. You need to do this so that grub can initialize the bootloader RAM disk off the USB stick (i.e.: it makes the OF installation bootable from USB).
mkdir /tmp/a
cd /tmp/a
cp /boot/initrd-2.X.X.img /tmp/initrd.gz
gunzip /tmp/initrd.gz
cpio -i < /tmp/initrd
At this point we need to edit the “init” file (a text file containing the kernel module load commands that run during boot). I used vi to do this; I’m not sure if there is another editor available in rescue mode. Find the line containing “insmod /lib/sd_mod.ko” and insert the following snippet under it:
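The snippet itself was lost from the original post. Based on the modules copied in the next step, it would look something like the following sketch — the module names mirror the files copied below, and the sleep (duration is my guess) gives the USB bus time to enumerate before the root filesystem is mounted:

```
insmod /lib/ehci-hcd.ko
insmod /lib/uhci-hcd.ko
insmod /lib/usb-storage.ko
insmod /lib/sr_mod.ko
sleep 10
```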
Save the file, then copy the appropriate kernel modules into the temp directory. The source paths below are relative to the kernel’s driver directory, so cd /lib/modules/2.X.X/kernel/drivers first:
cp usb/storage/usb-storage.ko /tmp/a/lib
cp usb/host/ehci-hcd.ko /tmp/a/lib
cp usb/host/uhci-hcd.ko /tmp/a/lib
cp scsi/sr_mod.ko /tmp/a/lib
cd /tmp/a
find . | cpio -c -o | gzip -9 > /boot/usbinitrd.img
IMPORTANT – Now adjust the grub config (/boot/grub/grub.conf) to reference the new initrd filename (usbinitrd.img). You will also need to repeat this procedure after kernel upgrades (but then again, never touch a working system ;)).
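For reference, the relevant grub.conf stanza would end up looking roughly like this — the kernel filename and root device are placeholders; keep whatever your installed entry already has and change only the initrd line:

```
title Openfiler NAS (USB)
        root (hd0,0)
        kernel /boot/vmlinuz-2.X.X ro root=LABEL=/
        initrd /boot/usbinitrd.img
```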
Now reboot off the stick. More than likely it’s a no-go, since the installer did not install grub properly. Take out your Ubuntu (or other favourite rescue) CD and boot from it. Don’t use the OF2.3 CD in rescue mode…..IT DOES NOT WORK. Once booted, mount the USB stick and use the following commands to re-install grub:
mount /dev/sdc1 /mnt/source
grub-install --root-directory=/mnt/source /dev/sdc
Reboot and you should be good to go (you will get a couple of errors during boot about modules already being loaded…..ignore them). At some point you will want to move some of those auxiliary directories (/tmp, /var/log, /var/lock and others) plus swap off the stick and onto the 20GB portion of the RAID-6 we prepped earlier. Below is the fdisk -l listing of that “logical disk” (/dev/sdb on our system):
Disk /dev/sdb: 21.4 GB, 21474835968 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         609     4891761   83  Linux
/dev/sdb2             610         621       96390   83  Linux
/dev/sdb3             622         671      401625   83  Linux
/dev/sdb4             672        2610   15575017+    5  Extended
/dev/sdb5             672         673      16033+   83  Linux
/dev/sdb6             674        2610    15558921   82  Linux swap / Solaris
Here is a breakdown of what goes where (/dev/sdb6 is obviously swap, which was prepared with the “mkswap” command):
tmpfs      /tmp        tmpfs  defaults,noatime  0 0
tmpfs      /var/tmp    tmpfs  defaults,noatime  0 0
/dev/sdb1  /var/log    ext2   defaults          1 1
/dev/sdb2  /var/run    ext2   defaults          1 1
/dev/sdb3  /var/cache  ext2   defaults          1 1
/dev/sdb5  /var/lock   ext2   defaults          1 1
/dev/sdb6  swap        swap   defaults          0 0
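The swap partition in the listing needs to be initialized once before that fstab entry does anything; on our layout that would be (adjust the device name to yours):

```
mkswap /dev/sdb6
swapon /dev/sdb6
```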
You need to make the above changes to your USB stick’s /etc/fstab, but before rebooting, use the “cp -a” command to copy all the folders from the corresponding locations on the USB stick to the above partitions (mounting each partition temporarily, one at a time). This makes sure no process goes crazy on boot because it can’t find the lock directory (or cache, run, etc.).
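For one receiving partition, the sequence looks something like this — /mnt/tmp here is just a scratch mount point I’m using for illustration; repeat for each of sdb1/2/3/5 against its matching /var directory:

```
mkdir -p /mnt/tmp
mount /dev/sdb1 /mnt/tmp
cp -a /var/log/. /mnt/tmp/
umount /mnt/tmp
```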
Next we want to make four copies of this stick. You can use a Mac or Windows machine (using rawrite) or, better yet, Linux. It’s important that the stick you’re copying is not booted. Use the Ubuntu/whatever CD you used earlier, boot into rescue mode, go to the command line, and use the “dd” command to create three more copies of the stick you just prepped.
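With the source stick and a blank stick plugged into the rescue system at the same time, the copy is a straight device-to-device dd. The device names below are examples — triple-check them with fdisk -l first, because dd to the wrong device is unrecoverable:

```
dd if=/dev/sdc of=/dev/sdd bs=4M conv=fsync
```

You can also image the stick to a file once (of=stick.img) and then dd that file out to each blank stick.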
Two copies (one for safe keeping) will become your Master USB sticks to boot the machine in Master mode (as described earlier in this article). The other two copies (one for safe keeping) will become your Slave sticks.
These notes have nothing to do with the installation; I’m just putting them down here for safekeeping. Only use them if you’re in trouble.
– If you want to create a “Home Share” and you don’t get the “Make Home Share” button in the interface, something has gone wrong with one of the XML config files. No worries: find and edit the file /opt/openfiler/etc/homespath.xml. Inside, it will look something like this:
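The XML content was lost from the original post. The file is short; as a hypothetical sketch of its shape (the path value and exact element/attribute names here are my guesses, not verbatim Openfiler output):

```
<homespath value="/mnt/bigvg/studentvol/homes"/>
```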
This is where the problem is. The PHP code that drives the sharing interface thinks there already is a “homes” directory defined, but you know that’s not the case. Since only one homes entry is allowed, the web interface will not give you the option to make your new share the “Home Share”. To fix this, we need to take out what’s inside the quotes as the value of homespath. Once that’s done, the file will look like this:
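Again, this is a hypothetical sketch (the exact element/attribute names may differ on your install); the key point is that the quoted value is now empty:

```
<homespath value=""/>
```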
Save this file and go back to the share tab in the web interface and you will now get a “Make Home Share” button again.
– If you have upgraded to a Windows 2008 R2 (Win2k8 r2) AD domain and you’re getting authentication errors when accessing your openfiler shares (although everything was working fine under R1) like the ones below:
Nov 16 08:42:02 openfiler winbindd: [2009/11/16 08:42:02, 0] rpc_client/cli_pipe.c:rpc_api_pipe(789)
Nov 16 08:42:02 openfiler winbindd: rpc_api_pipe: Remote machine dc.domain.tld pipe \NETLOGON fnum 0x4005 returned critical error. Error was NT_STATUS_PIPE_DISCONNECTED
[2009/11/16 08:43:12, 1] winbindd/winbindd_util.c:trustdom_recv(269)
Could not receive trustdoms
then your problem (more than likely) is the version of Samba that comes with Openfiler 2.3. You need to upgrade to 3.4.5. Run “conary updateall” or do a “System Update” from the interface, let it update everything, and reboot your machine. Once your machine is back up, leave the AD domain and rejoin it, and everything should be fine.
– If you’re having problems accessing a Samba share you just created on your brand-new Openfiler, you might want to check the following. Let’s say you have a volume group called “bigvg” and a volume inside it called “studentvol”, where you have a share called “test”. If you’re having problems accessing the share by just using something like smb://openfiler-servername/test, you might want to try connecting to the following instead:
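The example path was lost from the original post. Given that Openfiler prepends the volume group and volume name to the share name (explained next), the long name would be something along these lines — the exact separator and ordering are my guess, so confirm with smbclient -L, which prints the full share names:

```
smb://openfiler-servername/bigvg.studentvol.test
```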
This is because, by default, Openfiler tries to be smart and prepends the volume group and volume name to the share name you give it. If you have a small installation, this can be a pain. The easy way to fix it is to use the “Override SMB/Rsync share name:” field on the “Shares/Edit share” screen. I tend to use the same share name I initially chose (i.e.: “test” in this case), just to keep it simple. The only thing to remember here is to make sure you don’t override with a duplicate name…..that’s gonna blow up real good.
– A couple of useful commands for Samba troubleshooting…..
To see a list of shares on your Openfiler server (note that the Unix command will give you those long share names):
Unix: smbclient -L OpenfilerServername -U domainloginid
Win: net view \\OpenfilerServername
– There is another issue with this master/slave setup, and that is UID/GID synchronization for Samba. This comes into play since we’re rsyncing our files from master to slave, which also transfers their respective UIDs/GIDs to the slave machine. If the master fails, our procedure is to turn it off and reboot the slave using the master’s USB stick. This works, but all those rsync’ed UIDs/GIDs will not match when the slave machine is booted from the master’s stick (the Samba voodoo that translates Windows SIDs to Linux UIDs/GIDs is kinda random)…..UNLESS YOU DO THE FOLLOWING (taken from the Samba How-To):
The idmap_rid facility is a new tool that, unlike native winbind, creates a predictable mapping of MS Windows SIDs to UNIX UIDs and GIDs. The key benefit of this method of implementing the Samba IDMAP facility is that it eliminates the need to store the IDMAP data in a central place. The downside is that it can be used only within a single ADS domain and is not compatible with trusted domain implementations.
This alternate method of SID to UID/GID mapping can be achieved using the idmap_rid plug-in. This plug-in uses the RID of the user SID to derive the UID and GID by adding the RID to a base value specified. This utility requires that the parameter “allow trusted domains = No” be specified, as it is not compatible with multiple domain environments. The idmap uid and idmap gid ranges must be specified.
The idmap_rid facility can be used both for NT4/Samba-style domains and Active Directory. To use this with an NT4 domain, do not include the realm parameter; additionally, the method used to join the domain uses the net rpc join process.
An example smb.conf file for an ADS domain environment is shown below:
# Global parameters
workgroup = KPAK
netbios name = BIGJOE
realm = CORP.KPAK.COM
server string = Office Server
security = ADS
allow trusted domains = No
idmap backend = idmap_rid:KPAK=500-100000000
idmap uid = 500-100000000
idmap gid = 500-100000000
template shell = /bin/bash
winbind use default domain = Yes
winbind enum users = No
winbind enum groups = No
winbind nested groups = Yes
printer admin = "Domain Admins"
In a large domain with many users it is imperative to disable enumeration of users and groups. For example, at a site that has 22,000 users in Active Directory the winbind-based user and group resolution is unavailable for nearly 12 minutes following first startup of winbind. Disabling enumeration resulted in instantaneous response. The disabling of user and group enumeration means that it will not be possible to list users or groups using the getent passwd and getent group commands. It will be possible to perform the lookup for individual users, as shown in the following procedure.
The use of this tool requires configuration of NSS as per the native use of winbind. Edit the /etc/nsswitch.conf so it has the following parameters:
passwd: files winbind
shadow: files winbind
group: files winbind
hosts: files wins
The following procedure puts the idmap_rid facility to use:
1. Create or install an smb.conf file with the above configuration.
2. Edit the /etc/nsswitch.conf file as shown above.
3. Join the domain:
root# net ads join -UAdministrator%password
Using short domain name -- KPAK
Joined 'BIGJOE' to realm 'CORP.KPAK.COM'
An invalid or failed join can be detected by executing:
root# net ads testjoin
[2004/11/05 16:53:03, 0] utils/net_ads.c:ads_startup(186)
ads_connect: No results returned
Join to domain is not valid
The specific error message may differ from the above because it depends on the type of failure that may have occurred. Increase the log level to 10, repeat the test, and then examine the log files produced to identify the nature of the failure.
4. Start the nmbd, winbindd, and smbd daemons in the order shown.
5. Validate the operation of this configuration by executing:
root# getent passwd administrator
Please note that the updated version of Samba that gets installed after you do “conary updateall” (see above) has an option for this under the “Advanced” tab of the Accounts section.
5 responses to “Openfiler install on large disk + failover setup + usb install”
Thank you so much, Many Ayromlou!!
I followed your guide to install Openfiler onto a USB stick; in the end I fixed grub with “Super Grub CD”, manually changed the partition to load in grub.conf, and Openfiler now starts from the USB stick.
Thanks again for your work
Ivan from Italy
Great post for 2.3, Many….
One question: I am not able to ssh or login to the console as it says "This interface has not been implemented yet" – how did you access /etc/fstab on the stick AND the filesystem you created on your RAID6 (to relocate /var directories with cp -aR) simultaneously?
Hoping there's some secret password cuz openfiler login doesn't work for me….
Well, not sure if that's one question or two. To get ssh working you might have to log in to the web interface and activate the SSH server. I forget which tab allows you to do that, but it's in there. The login and password should be whatever you use to log in through the command-line console on the physical machine.
To answer your other question (not sure how they relate…..hopefully you can figure it out): you boot the machine using the USB stick, so fstab will be where it normally is (no magic there), under /etc/fstab. I call those partitions on the RAID-6 physical disk (not the stick) the receiving partitions. They will be receiving the files from the equivalent folders on the stick through the cp -aR process. All you have to do is mount them (one at a time) under some temporary path (e.g.: /mnt/tmp), do the copy, and unmount them.
You should be able to do this from the command line console and/or the ssh interface (once you get it working).
Hope this helps….let me know if you have other questions…..
Thanks for the response. I looked everywhere in that GUI and all I've found is a Secure Console link under the system tab. When I click on it I get a new window with the title Secure Shell but nothing else. Perhaps I'm missing a plug-in for my browser that Openfiler is not telling me about. In any case, I may remount the stick on my trusted Ubuntu box and attempt to tweak sshd_config accordingly, but at this point I just want to get my new NAS operational since my old one died…good thing I have a backup! ;)
So, I'm thinking now of buying 8GB micro SDHC cards (class 6) with a usb reader and install Openfiler on that without moving filesystems to the RAID. Openfiler states it 'recommends' 2GB for OS and 1GB for swap so 8GB 'should' be enough. Having said that, let me ask a different question although it may be too early for you to tell based on how long you've had this in production: now that you have those filesystems moved to the RAID6 and told Openfiler where to write to, how much disk and how fast is Openfiler consuming on your 20GB RAID partitions? OK….that was a 2 part question… :)
Can I suggest right off the start that you don't use microSDHC? They are Sllooooowwwwww……Use a fast stick or SDHC or a fast CF card. Trust me, you'll be glad you did when your boot time is shorter and you don't have to wait for a sluggish command line to kick in.
About OpenSSH: you should be able to get to your machine via what they call the secure console, from the System tab. You do need the Java plugin installed in your browser. You can also just use the standard ssh command (if you're on a Mac or Linux, and/or if you have ssh installed on Windows): "ssh ipaddressofserver -l root" to connect and log in.
The 20GB partition on my machine was just a fluke, really. Openfiler does not consume nearly that much and has processes that clean those log/tmp directories. You would be fine with 1/10th of that (i.e.: 2GB for all those extraneous folders I moved off the stick). The only reason I used 20 was that I had already done the RAID-6 binding/striping process on the disks and didn't want to sit through another 53 hours of watching blinky lights as the 13.5 TB RAID-6 volume got recreated.
If I had to do it all over again, I would get a multicard internal SATA card reader (you know, the ones that look like a 3.5-inch floppy drive) with the fastest SDHC or CF card (one of those Extreme III or IV cards from SanDisk, for example) and install on that.
JUST PLEASE MAKE SURE THAT YOU IMAGE THE SYSTEM DRIVE (BE IT USB STICK OR CF OR SDHC) TO A FILE AND BURN THE FILE ONTO A CD/DVD FOR SAFE KEEPING. USB DRIVES DO FAIL AND IF YOU LOSE YOUR SYSTEM DRIVE, YOU'RE GONNA HATE YOURSELF (AND ME) :-).
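The imaging itself is one dd per direction — the device name below is an example, so check yours with fdisk -l first:

```
dd if=/dev/sdc of=openfiler-stick.img bs=4M
```

Burn openfiler-stick.img to a CD/DVD; restoring to a replacement stick is the same command with if and of swapped.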
Let me know if you run into trouble or have questions.