Thursday, August 22, 2013

using existing legacy geom mirror disks with nas4free / freenas / later versions of BSD

I am in the process of retiring my old FreeBSD 5.5 based file server and replacing it with a more modern BSD distro aimed specifically at file serving duties. The current storage consists of identical pairs of disks configured with geom_mirror (RAID1) for redundancy. I'll be using a new motherboard and memory in the new server but will be re-using the existing disks. This got me wondering whether it would be a simple case of plugging in the disks and having the new OS recognise their GEOM metadata, allowing me to make use of them with minimal hassle.
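For reference, gmirror can dump the metadata it finds on any disk, which is a quick way to check what a provider carries before doing anything destructive. A minimal sketch (device names match the test below):
# load the mirror class if it is not already loaded or compiled into the kernel
kldload geom_mirror
# dump the gmirror on-disk metadata (mirror name, version, members) from each disk
gmirror dump /dev/da1
gmirror dump /dev/da2
# show any mirrors GEOM has already assembled from that metadata
gmirror status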

I decided to prove or disprove this theory using my VMWare ESXi server.

Firstly, I created a FreeBSD 5.5 VM and a FreeBSD 9 VM. I did it this way instead of using the actual NAS4Free distro because NAS4Free is itself based upon FreeBSD 9, so the end result should be the same!

I add two small extra disks to the 5.5 box and then go ahead and configure a RAID1 mirror:

$ su
Password:
bsd55# uname -a
FreeBSD bsd55 5.5-RELEASE FreeBSD 5.5-RELEASE #0: Tue May 23 14:58:27 UTC 2006     root@perseus.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  i386
bsd55# kldload /boot/kernel/geom_mirror.ko
bsd55# gmirror label gm055 da1 da2
bsd55# gmirror status
        Name    Status  Components
mirror/gm055  COMPLETE  da1
                        da2
bsd55# newfs /dev/mirror/gm055
/dev/mirror/gm055: 512.0MB (1048572 sectors) block size 16384, fragment size 2048
        using 4 cylinder groups of 128.00MB, 8192 blks, 16384 inodes.
super-block backups (for fsck -b #) at:
 160, 262304, 524448, 786592
bsd55# mount /dev/mirror/gm055 /mnt
bsd55# echo 'proof of conecpt' >/mnt/poc
bsd55# ls -l /mnt
total 4
drwxrwxr-x  2 root  operator  512 Aug 21 22:43 .snap
-rw-r--r--  1 root  wheel      17 Aug 21 22:49 poc
bsd55# cat /mnt/poc
proof of conecpt
bsd55# shutdown now
Shutdown NOW!
shutdown: [pid 574]
bsd55#                                                                          
*** FINAL System shutdown message from nf@bsd55 ***
System going down IMMEDIATELY



System shutdown time has arrived

Next I turn the power off on the 5.5 VM and then use the vSphere client to edit the settings of the FreeBSD 9 VM, attaching the 5.5 VM's mirrored disks to it.

Now I power on the FreeBSD 9 box and ssh into it to see what the new kernel makes of the old disks:
$ su
Password:
You have mail.
root@hg:/usr/home/nf # uname -a
FreeBSD hg 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 09:23:10 UTC 2012     root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
root@hg:/usr/home/nf # kldload /boot/kernel/geom_mirror.ko
root@hg:/usr/home/nf # dmesg
GEOM_MIRROR: Upgrading metadata on da2 (v3->v4).
GEOM_MIRROR: Device mirror/gm055 launched (2/2).
GEOM_MIRROR: Upgrading metadata on da1 (v3->v4).
root@hg:/usr/home/nf # fsck -t ufs /dev/mirror/gm055
** /dev/mirror/gm055
** Last Mounted on /mnt
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
3 files, 3 used, 247891 free (27 frags, 30983 blocks, 0.0% fragmentation)

***** FILE SYSTEM MARKED CLEAN *****
root@hg:/usr/home/nf # mount /dev/mirror/gm055 /mnt
root@hg:/usr/home/nf # ls -l /mnt
total 4
drwxrwxr-x  2 root  operator  512 Aug 22 00:51 .snap
-rw-r--r--  1 root  wheel      17 Aug 22 00:51 poc
root@hg:/usr/home/nf # cat /mnt/poc
proof of conecpt

As can be seen above, the kernel recognised the legacy GEOM mirrored disks and performed a metadata upgrade from v3 to v4. To go further, I mounted the mirror and proved that the original data could be read back intact.
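One follow-up worth doing on the new box is making the mirror come up by itself at boot. A minimal sketch, using the mirror name and mount point from the test above:
# load the mirror class at boot so the array is assembled before fstab is processed
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# mount the mirrored filesystem automatically at boot
echo '/dev/mirror/gm055  /mnt  ufs  rw  2  2' >> /etc/fstab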

So if anyone is thinking of moving GEOM based disks from an old BSD box to a newer distribution, fear not - as usual BSD is rock steady and can be relied upon!

Wednesday, August 07, 2013

imapsync couldn't append / NO APPEND failed zimbra invalid name

I am in the process of migrating user mail from a Zimbra ZCS v6.x FOSS BSD server to a Zimbra ZCS v8.x FOSS VMWare appliance Linux server. I used the following Zimbra command line tool to produce a compressed archive for each user via SOAP/REST calls:
/opt/zimbra/bin/zmmailbox -z -m user@domain.com getRestURL "//?fmt=tgz" > /tmp/account.tgz
..on the source server, and:
/opt/zimbra/bin/zmmailbox -z -m user@domain.com postRestURL "//?fmt=tgz&resolve=reset" /tmp/account.tgz
.. on the destination server. Whilst this appeared to work and was pretty fast, when I did some verifications I got the following results:

Decommissioned server:
[zimbra@kronenbourg /usr/home/user1]$ zmmailbox -z -m user1@somedomain.co.uk
mailbox: user1@somedomain.co.uk, size: 6.80 GB, messages: 105766, unread: 49210
mbox user1@somedomain.co.uk>

[zimbra@kronenbourg /usr/home/user2]$ zmmailbox -z -m user2@somedomain.co.uk
mailbox: user2@somedomain.co.uk, size: 2.15 GB, messages: 23920, unread: 210
mbox user2@somedomain.co.uk>
New VMWare appliance server:
zimbra@kronenbourg:~/redolog/archive$ zmmailbox -z -m user1@somedomain.co.uk
mailbox: user1@somedomain.co.uk, size: 8.00 GB, messages: 112836, unread: 52558
authenticated as user1@somedomain.co.uk
mbox user1@somedomain.co.uk>

zimbra@kronenbourg:~/redolog/archive$ zmmailbox -z -m user2@somedomain.co.uk
mailbox: user2@somedomain.co.uk, size: 2.10 GB, messages: 22823, unread: 210
authenticated as user2@somedomain.co.uk
A little worrying to say the least, so I decided to revisit my old "friend" imapsync and perform a "live" mail migration between the servers. First of all I had to git clone the latest imapsync source and build it on the VMWare appliance Ubuntu box, as apt-get installed a very old version. I also had to grab and install an updated Perl Mail::IMAPClient module (http://search.cpan.org/~plobbes/Mail-IMAPClient-3.29/) before imapsync would build; I'm using imapsync v1.558, btw.
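Roughly, the build boiled down to something like this (a sketch only; the repository URL and exact dependency set are assumptions on my part, so check the imapsync install notes for your distro):
# apt-get only offers a stale imapsync, so fetch the upstream source instead
git clone https://github.com/imapsync/imapsync.git
cd imapsync
# imapsync needs a recent Perl Mail::IMAPClient module before it will build/run
sudo cpan Mail::IMAPClient
# install the imapsync script itself
sudo make install
Here is the imapsync command line I settled upon: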
nohup imapsync --nofoldersizes --usecache --tmpdir /var/tmp --buffersize 81920000 --nosyncacls --syncinternaldates --subscribe --host1 192.168.0.66 --port1 993 --host2 192.168.0.64 --port2 993 --user1 user1\@somedomain.co.uk --password1 password1 --user2 user2\@somedomain.co.uk --password2 password2 --ssl1 --ssl2 --authmech1 PLAIN --authmech2 PLAIN --reconnectretry1 100000 --reconnectretry2 100000 &
Then I just tailed nohup.out and went to bed :-)... It was around six times slower than the REST export option (though this was over a Powerline network, so YMMV; slower was expected in any case). Reviewing the log, I noticed hundreds of errors along the lines of:
msg Flagged/63672 {15067}       copied to Flagged/6801       11.80 msgs/s  314.255 KiB/s
msg Flagged/98800 {30349}       copied to Flagged/6803       11.80 msgs/s  314.254 KiB/s
- msg Flagged/99159 {24629} couldn't append  (Subject:[Re: Join my network on LinkedIn]) to folder Flagged: 6552 NO APPEND failed
msg Flagged/111586 {40960}      copied to Flagged/6804       11.80 msgs/s  314.262 KiB/s
msg Flagged/140168 {25505}      copied to Flagged/6806       11.80 msgs/s  314.255 KiB/s
.. as you can see we hit trouble between destination messages 6803 and 6804; digging through zimbra.log I found the corresponding entries:
2013-08-07 11:33:38,907 INFO  [ImapSSLServer-12] [name=user1@somedomain.co.uk;mid=19;ip=192.168.0.64;] mailop - Adding Message: id=6803, Message-ID=, parentId=-1, folderId=6745, folderName=Flagged.

2013-08-07 11:33:38,986 INFO  [ImapSSLServer-6] [name=user1@somedomain.co.uk;mid=19;ip=192.168.0.64;] imap - APPEND failed: invalid name: wellknown:FLAG0

So some of my mails are tagged, and some of those tag names contain spaces (bad practice for a developer, right? :/). The solution is to SSH onto the old (source) server and rename any tags containing spaces:

mbox user1@somedomain.co.uk> rt "well known" "well_known"
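To spot every offending tag up front, zmmailbox can also list all of the tags in the source mailbox first (a quick check worth doing before any renames; gat is the getAllTags subcommand):
# list every tag in the source mailbox so names containing spaces can be spotted
/opt/zimbra/bin/zmmailbox -z -m user1@somedomain.co.uk gat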
Since we used the --usecache option with imapsync, the next time we re-run the migration it will only copy the missing mails (i.e. the ones that previously failed because their tag names contained spaces), and now we are closer:
root@kronenbourg:/opt/zimbra/log# su zimbra
zimbra@kronenbourg:~/log$ zmmailbox -z -m user1@somedomain.co.uk
mailbox: user1@somedomain.co.uk, size: 6.76 GB, messages: 104775, unread: 49035
.. but still missing 991 messages!! Looking back at the imapsync output I see other errors:
Host1 uid 63620 no header by parse_headers so taking whole header with BODY.PEEK[HEADER]
Host1 _SOME_FOLDER_/63620 size 2278 ignored (no wanted headers so we ignore this message)
Seemingly imapsync needs help identifying some messages by their headers (possibly emails without a Message-ID, according to one of my friends), so adding the --useheader ALL switch fixes that:
nohup imapsync --nofoldersizes --usecache --tmpdir /var/tmp --buffersize 81920000 --nosyncacls --syncinternaldates --subscribe --host1 192.168.0.66 --port1 993 --host2 192.168.0.64 --port2 993 --user1 user1\@somedomain.co.uk --password1 password1 --user2 user2\@somedomain.co.uk --password2 password2 --ssl1 --ssl2 --authmech1 PLAIN --authmech2 PLAIN --reconnectretry1 100000 --reconnectretry2 100000 --useheader ALL &
After all this, one issue still remains: certain folders are "virtual" and can't be synced via normal IMAP. In my case I had two such folders to deal with, "Contacts" and "Emailed Contacts". For these we fall back to the REST calls detailed above:
/opt/zimbra/bin/zmmailbox -z -m user1@somedomain.co.uk getRestURL '//?fmt=tgz&query=under:"Emailed Contacts"' >emailedcontacts.tgz
/opt/zimbra/bin/zmmailbox -z -m user1@somedomain.co.uk getRestURL '//?fmt=tgz&query=under:"Contacts"' >contacts.tgz
..on the source server, and:
/opt/zimbra/bin/zmmailbox -z -m user1@somedomain.co.uk postRestURL "//?fmt=tgz&resolve=skip" emailedcontacts.tgz
/opt/zimbra/bin/zmmailbox -z -m user1@somedomain.co.uk postRestURL "//?fmt=tgz&resolve=skip" contacts.tgz
.. on the destination server. Notice the use of the "skip" argument: make sure to use this and not "reset", or else all existing mails will be deleted first! Time to do a final verification between the servers to get some confidence... we'll use zmmailbox and imapsync and see where we're at:

zmmailbox:
zimbra@kronenbourg:~/log$ zmmailbox -z -m user2@somedomain.co.uk
mailbox: user2@somedomain.co.uk, size: 2.15 GB, messages: 23912, unread: 210
vs
zimbra@kronenbourg:~/log$ zmmailbox -z -m user2@somedomain.co.uk
mailbox: user2@somedomain.co.uk, size: 2.15 GB, messages: 23024, unread: 210

imapsync:
/usr/bin/imapsync --host1 192.168.0.66 --port1 993 --host2 192.168.0.64 --port2 993 --user1 user2@somedomain.co.uk --password1 password1 --user2 user2@somedomain.co.uk --password2 password2 --ssl1 --ssl2 --authmech1 PLAIN --authmech2 PLAIN --dry --justfolders

...

Host1 Nb messages:           23893 messages
Host1 Total size:       2306660541 bytes (2.148 GiB)
Host1 Biggest message:    33794974 bytes (32.229 MiB)

vs

Host2 Nb messages:           23893 messages
Host2 Total size:       2306685481 bytes (2.148 GiB)
Host2 Biggest message:    33794974 bytes (32.229 MiB)
So zmmailbox tells us we still have missing messages, whereas imapsync tells us we're good (well, apart from a ~25 KB size discrepancy). I haven't discovered why I get these differing results, but as it stands right now I'm happy enough with the migration; I just hope this helps someone else.

Conclusion: Don't really have an awful lot of faith in the Zimbra CLI tools!

Tuesday, July 16, 2013

mirror/scrape/backup/download/archive all Atlassian JIRA issues (and attachments) present in my activity stream

I was looking for a way to make a full local backup of all JIRA issues mentioned in my JIRA activity feed (which equates to every issue I have ever worked on, updated, commented on, etc.).

Pre-requisites:

To accomplish this I've used standard Unix/POSIX tools, so you'll need to use Cygwin if you're doing this on Windows (everybody that runs Windows has Cygwin installed anyway, right? :)).

wget: Make sure you are using a recent version of wget, as we use quite a few command line switches; I'm using v1.13.4.
xmllint: Used for executing XPath against the returned XML DOM of our activity feed.
tr, awk: For standard stream processing, etc.
Atlassian JIRA: My tests are against v6.0 of the "On Demand" version of the product (in other words, hosted by Atlassian); I'm hoping/guessing this will also work for a locally managed "Download" version. You must have the "activity stream" gadget installed and accessible for the user profile you are running this against.

Commands:

Make a directory to store the output:
mkdir jiraSuck
Log in to the Atlassian JIRA website by providing your username and password via POST data, saving the cookies so we can maintain the session (obviously replace my username with your own).
(Please note that in my example the JIRA server I am querying is the "On Demand" type, which means it is hosted by Atlassian as a subdomain of atlassian.net):
wget --keep-session-cookies --max-redirect 0 --no-check-certificate --save-cookies cookies.txt --post-data 'username=sgillibrand&password=YOURPASSWORD' https://JIRASUBDOMAIN.atlassian.net/login
Get an XML stream of ALL your activity by asking for a large max results figure (999999):
wget --no-check-certificate --load-cookies cookies.txt -O jiraActivity.xml "https://JIRASUBDOMAIN.atlassian.net/activity?maxResults=999999&streams=user+IS+sgillibrand&os_authType=basic&title=undefined"
We accomplish quite a few things with this next line. Note that because the returned XML representation of our activity is littered with numerous namespaces, I'm using the local-name() function of XPath so we can operate in a namespace-agnostic way. First we extract the HREF/URL of each JIRA issue mentioned in each activity entry, then we place each URL on a separate line, delete the 'href=' prefix and strip the double quotes. Next we remove duplicate URLs, and finally we derive an alternate URL from each one that gives us a printable version of the JIRA issue (handy, as it contains all field data expanded). All the output is redirected to the file jiraUrls.txt:
xmllint --xpath //\*\[local\-name\(\)\=\'entry\'\]/\*\[local\-name\(\)\=\'target\'\]/\*\[local\-name\(\)\=\'link\'\]/@href jiraActivity.xml | tr " " \\n | awk 'sub(/href=/, "")' | awk 'gsub(/"/, "")' | awk '!x[$0]++' | awk -F "/" '{printf "%s\n%s/%s/%s/si/jira.issueviews:issue-html/%s/%s.html\n",$0,$1,$2,$3,$5,$5}' >jiraUrls.txt
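Before kicking off the scrape it's worth a quick sanity check on the URL list we just built (each issue should contribute two lines: its browse URL and its printable-view URL):
# count and eyeball the URLs before handing the list to wget
wc -l jiraUrls.txt
head -4 jiraUrls.txt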
Change into our newly created directory:
cd jiraSuck
Now scrape all the JIRA issue standard HTML, printable HTML and specified attachments - this can take some time!
(change the acceptable file extensions and domains to suit your needs):
wget --no-check-certificate -nc -r -k -p -l 1 -E --accept=.jpg,.png,.zip,.7z,.rar,.html,.htm,.xls,.ppt,.xlsx,.doc,.docx,.pptx --restrict-file-names=windows --domains=atlassian.net --load-cookies ../cookies.txt -i ../jiraUrls.txt

Some time later..............
FINISHED --2013-07-16 16:30:46--
Total wall clock time: 30m 49s
Downloaded: 856 files, 373M in 25m 6s (253 KB/s)

Explanation of the resultant directory structure:

The browse directory contains the normal HTML view of each JIRA issue.
The si directory contains the printable view of each JIRA issue.
The secure directory contains any attachments associated with each JIRA issue.

All pertinent links have been converted to point relatively to your local directory structure.
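Since the whole point is an archive, it's also worth rolling the result into a dated tarball once the scrape finishes; a one-liner along these lines does it (run from the parent of the jiraSuck directory):
# bundle the scraped issue pages and attachments into a dated archive
tar czf jiraSuck-$(date +%Y%m%d).tar.gz jiraSuck/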

Job done :)

Wednesday, July 03, 2013

Tracking focus in Java

After a lot of experimenting I found a reliable way to track which controls are getting focus:
import java.awt.Component;
import java.awt.KeyboardFocusManager;
import java.beans.PropertyChangeEvent;
import java.beans.PropertyChangeListener;

// Note: currFocus must be an instance field (not a method-local variable),
// otherwise the anonymous listener below would not be allowed to assign to it.
Component currFocus = null;

// The KeyboardFocusManager fires a "focusOwner" property change whenever the
// focused component changes, so one listener tracks focus across the whole app.
KeyboardFocusManager.getCurrentKeyboardFocusManager().addPropertyChangeListener(
  new PropertyChangeListener() {
    public void propertyChange(PropertyChangeEvent e) {
      if ("focusOwner".equals(e.getPropertyName()) && e.getNewValue() != null) {
        currFocus = (Component) e.getNewValue();
      }
    }
  }
);

Tuesday, February 19, 2013

Create a windows form initially hidden

I've recently had the need to create a Windows .NET form that is hidden from the moment it is created, and found numerous suggestions; however, the simplest and most elegant way seems to be:
// keep the form out of the taskbar and start it minimized so it never appears
this->ShowInTaskbar = false;
this->WindowState = System::Windows::Forms::FormWindowState::Minimized;
.. before you create the window/message pump with Application::Run().