This is a list of useful commands / tips for OpenSolaris. I know, this is probably all common knowledge, but I don't use Solaris as much as I used to, so it's difficult to remember all the new stuff.
There's lots of cool new stuff to do with ZFS. Here are a few examples of what you can do with various commands; most of these have many more options and features.
- List all ZFS data sets: % pfexec zfs list
- Show all boot environments: % pfexec beadm list
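Snapshots are one of the more useful ZFS features. A minimal sketch, assuming the default OpenSolaris home dataset name rpool/export/home (adjust for your pool layout):

```shell
# Take a snapshot named "backup" of the home dataset
pfexec zfs snapshot rpool/export/home@backup

# List all snapshots
pfexec zfs list -t snapshot

# Roll the dataset back to the snapshot (discards any newer changes!)
pfexec zfs rollback rpool/export/home@backup
```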
Sometimes /var/pkg will get filled up with old "junk". You can safely remove /var/pkg/download. To keep things clean automatically, run
% pfexec pkg set-property flush-content-cache-on-success True
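Clearing out the cache that has already accumulated is just a matter of removing the directory; a sketch:

```shell
# Remove the cached package downloads; pkg(1) re-fetches content as needed
pfexec rm -rf /var/pkg/download
```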
This is an ongoing set of benchmarks against various Unix (Linux, Solaris, MacOS X etc.) distributions, trying to get a feel for how they compare on CPU performance. This is by no means a complete test; I'm only running a few benchmarks on each distribution, and I'm primarily looking at raw CPU performance. Also, in order to make this reasonably easy for me to handle, all tests are done inside a VirtualBox virtual machine, and only one CPU is ever used (meaning, the tests don't show how well the Unix flavor manages SMP).
The test system is a Linux FC9 box, with a Core2 6600 CPU running at 2.40GHz, with 4MB of cache. Each Unix distribution is installed in a VirtualBox (v2.10) virtual host, with 512MB of RAM (plenty for all tests). All Linux distributions were updated with all the latest patches available at the time of the test, running whatever kernel, compilers and libraries available at the time.
Each benchmark is run three times on each distribution, and the best result is picked from each test. Currently the only benchmark I run is ByteMark, called nbench. The source for this is available at http://www.tux.org/~mayer/linux/bmark.html . As mentioned before, this benchmark not only exercises the kernel, but also the compiler suite and supporting libraries (e.g. glibc). But then again, someone picking a distribution probably wants to see the "whole picture", right?
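Building and running nbench is straightforward; a sketch, assuming the tarball name on the page above (the exact version may differ):

```shell
# Download, unpack, build and run the ByteMark (nbench) benchmark
wget http://www.tux.org/~mayer/linux/nbench-byte-2.2.3.tar.gz
tar xzf nbench-byte-2.2.3.tar.gz
cd nbench-byte-2.2.3
make          # builds with the system compiler and the CFLAGS from the Makefile
./nbench      # prints the index scores when done
```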
Next steps (and caveats)
I'm planning on running lmbench on these machines as soon as possible, to see if those numbers show any more significant differences. I'm also planning on adding benchmark results for SUSE, which is a popular Linux distribution. Unfortunately I can't test MacOS X on this hardware, since I can't install it under VirtualBox. In particular, lmbench would probably be more useful without virtualization, for testing how well the system behaves on things like disk I/O, network and SMP scalability. With time permitting, and if/when I can free up the hardware, I think it'd be useful to do these benchmarks without virtualization involved. It's unclear today what impact VirtualBox has on these benchmarks (but I'm hoping it's relatively fair across all the distributions). When Michelle lets me upgrade my desktop, I'll use the old system for rerunning these tests.
I use VirtualBox to run various Unix distributions on my home desktop. I had an OpenSolaris installation that somehow became unbootable during a regular package upgrade, and unfortunately I had some files (package info files) in my home directory that I really wanted back. Since I couldn't figure out how to make this installation bootable again, I started with a fresh OpenSolaris installation (under VirtualBox), hoping that I could somehow mount the old VDI. And yes, it did work, after some experimentation with the zfs and zpool command line utilities.
The first thing I had to do was to configure VirtualBox to make the old VDI (disk partition) available to the new OpenSolaris installation. This is really easy to do, so I'm not going to go into details here. Once that is done, the new disk showed up as "c0d1", so now I had to figure out how to activate the ZFS pool. Since this pool has the same name (and mount point) as my new OpenSolaris installation, it turned out to be a bit trickier than I thought. First of all, I had to find the pool ID of the pool, which you get by simply running zpool import. This is the easy part, and in my case, the ID was "9894566475259874708". Now, to import this pool, we have to rename it as well:
# zpool import -f 9894566475259874708 lpool
The -f was necessary to force it to do the import, since it still seemed to think that I wanted to import over the existing rpool name. And, I also got warnings about this with the -f option, but it seemed to be harmless. Once this was done, I had to change the mountpoint of the home directory to something else (e.g. "/OLD"), and finally I could mount it:
# zfs set mountpoint=/OLD lpool/export/home
# mount /OLD
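Putting it all together, the whole recovery looked roughly like this (the pool ID and names are from my setup; yours will differ):

```shell
# Find the ID of the old, not-yet-imported pool on the attached disk
pfexec zpool import

# Import it under a new name ("lpool") so it doesn't clash with the running rpool
pfexec zpool import -f 9894566475259874708 lpool

# Move the old home dataset's mountpoint out of the way, then mount it
pfexec zfs set mountpoint=/OLD lpool/export/home
pfexec zfs mount lpool/export/home
```

(`zfs mount` is equivalent to the plain `mount /OLD` used above.)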
I've been fiddling with OpenSolaris lately, and one obviously required package is dearly missed: Emacs. I tried to compile it myself, but couldn't get "configure" to pass, so I decided to look around for alternatives. It turns out SunFreeware has a prebuilt Emacs package, somewhat suitable for OpenSolaris. So, not knowing anything about the IPS system, I fumbled around a bit, until I figured out that the following commands added this repository (or authority, I think it's called):
% pfexec pkg image-create -F -a sunfreeware.com=http://pkg.sunfreeware.com:9000 /var/sunfreeware
% pfexec pkg set-authority -O http://pkg.sunfreeware.com:9000 sunfreeware.com
% pfexec pkg refresh --full
# Now I can run
% pfexec pkg search -r emacs
% pfexec pkg install pkg://IPSFWemacs    # Copied from above search results
This version of emacs is a bit old (21.x), and it doesn't seem to work when started with an X11 window. But at least I don't have to suffer with vi any more.
Update: I made an OpenSolaris package with Emacs v22.2 for x86, which has both an X11 version (emacs) and a non-X11 version (emacs-nox). The tar ball with the package is available on my FTP site. This might be a usable alternative until the official OpenSolaris IPS repository adds an emacs package.
I've been playing around with Solaris today, and it's just as good as I remember (except still missing all the tools I need, like Emacs). However, I was unable to NFS mount my home directory from my RHEL4 box. I would get an error like
bash-3.00# mount -o ro machine.ogre.com:/export/disk /foo/bar
nfs mount: mount: /foo/bar: Not owner
and my /var/adm/messages would have
Apr 3 22:22:36 solaris10 nfs: [ID 435675 kern.warning] WARNING: NFS server initial call to machine failed: Not owner
RHEL4 does NFS v4 (or so it thinks at least), and Solaris is not happy with something there. Obviously there must be a way to get it to work, but right now I just needed it working. So, until I figure out what in NFS v4 is causing this, I decided to make Solaris just use NFS v3. There are a couple of ways to do this; the easiest is to just pass the vers=3 option to mount, e.g. in /etc/auto_home do something like
leif -rw,vers=3 machine:/export/home/leif
Or, you can change the defaults in Solaris 10 by editing /etc/default/nfs and modifying the client NFS version settings.
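If I remember correctly, the relevant knob in /etc/default/nfs is the client's maximum protocol version; something like this (uncomment the line and set the value):

```shell
# In /etc/default/nfs: cap the NFS client at protocol version 3
NFS_CLIENT_VERSMAX=3
```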
After changing this default, you have to run
# svcadm refresh svc:/network/nfs/client:default
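To double-check which NFS version a mount actually negotiated, nfsstat can show the per-mount options (look for vers= in the output); a sketch:

```shell
# Show the mount options (including vers=) for each active NFS mount
nfsstat -m
```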
Alternatively, you could probably also disable NFS v4 on the RHEL4 box.