Posted on November 7th, 2013
So, our SOP for migrating users to new computers is to create a disk image of their old computer. This is the very first thing we do. There are two reasons for this. First, it gives us a safety net in case we do something really stupid that wipes out their original data before we have imported it (yes, that has happened). Second, it is actually faster. Not much, but enough to make it worthwhile. We also keep this image around for at least one month just in case we missed something during the import process. We keep them around longer if space is available, but we will not delete them before one month is up.
After the disk image is created and the computer's base image has been installed, we normally mount the disk image and run Migration Assistant to import the user account. Only that doesn't work in 10.9 Mavericks anymore (actually, I think it is broken in 10.8.5 as well). The reason is simple, although it is a bit frustrating that Apple didn't account for this. The new Migration Assistant logs the current user out before starting. I'm sure there are very good reasons for this. But a side effect is that Mac OS X, rightly so, unmounts all user-mounted disk images when the user is logged out. This means the disk image containing the old computer's data is unmounted as well, which is why we can't find it in Migration Assistant.
I found a solution via some command line tools. It is easy, but unfortunately takes some documentation since it isn’t something you are likely to do that often. So, fire up the Terminal app and use this simple (but not easily remembered) command:
sudo hdiutil attach /Path/To/Image.dmg
The hdiutil command mounts the disk image. The sudo part, if you are not familiar with it, runs the command as the root user. When a disk image is mounted by the root user it is not automatically unmounted at logout, which keeps it available to Migration Assistant. And since Migration Assistant also runs as the root user, it can still access all the data it needs.
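A couple of related commands are handy here as well. These are just examples; the volume name will be whatever your image mounts as:

# confirm the image is attached and see where it mounted
hdiutil info

# when the migration is finished, unmount it again (it was mounted by root, so use sudo)
sudo hdiutil detach /Volumes/Old\ Computer\ HD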
Posted on September 19th, 2013
Just a quick note for anybody who has run into this problem. We bought 2 new HP LaserJet printers, an M601n and a CP4025. When printing directly to them (via a direct ethernet connection) they print perfectly fast: it takes about 5 seconds for the printer to start printing and another 3 seconds for the first page to come out (so 8 seconds total). When printing through a print server (i.e. one computer talks to the printers, and the rest of the workstations talk to the print server) the first page takes about 30 seconds to come out.
When doing troubleshooting it was determined that the HP print driver itself was causing the delay. The print job never hit the print server until about 22 seconds after clicking Print. It appears the print driver tries to communicate via SNMP with the printer for some reason. Because the IP address of the “printer” is for the print server, the driver cannot communicate with the printer and is instead talking to the print server. Since the print server doesn’t understand the SNMP request coming in, the driver just keeps trying and eventually times out and prints anyway.
HP is "investigating" the issue. Hopefully they will actually fix it. In the meantime, I have found a workaround. I won't go into step-by-step directions, but I will give an overview of how to fix it.
First you need to enable SNMP on your server. Once that is enabled and running, edit the /etc/snmp/snmpd.conf file (assuming you are on Mac, Linux might be elsewhere) and add this line towards the end:

pass .126.96.36.199.188.8.131.52.184.108.40.206.7.0 /bin/sh /usr/local/bin/snmp_printer
When receiving a request for that OID it will pass it on to the shell script at /usr/local/bin/snmp_printer, whose contents should be:
echo "MFG:Hewlett-Packard;CMD:PJL,PCLXL,PCL,PDF,POSTSCRIPT;CID:HPLJPDLV1;1284.4DL:4d,4e,1;MDL:Generic Printer;CLS:PRINTER;DES:Generic Printer;"
Once that is done you will need to either restart the SNMP agent or reboot the server. After that, printing should be back to normal speed. Basically, we are answering the HP driver and pretending to be a "Generic Printer". This seems to be enough to get it to move on with the print job. (Note: that echo line may look like 3 lines, but it is actually all one line; word wrap is a pain sometimes.)
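If you want to sanity check the setup before pointing the driver at it, something like this works (the community string and use of snmpwalk here are just examples):

# make sure the helper script prints the fake device ID
/bin/sh /usr/local/bin/snmp_printer

# make sure snmpd itself is answering
snmpwalk -v 2c -c public localhost system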
Posted on June 17th, 2013
The socially acceptable way to play with your food!
So, about 6 months ago, two of the guys I work with and I (one is "lead" for one of our offsite campuses, which basically means he is in charge of making sure everything works; the other is one of our graphic designers) came up with an idea for a new card game. That was nothing new. We are always coming up with harebrained ideas over lunch (Monoprice vending machines, for example). Usually about five minutes into the discussion we realize how bad an idea it is and drop it. This time we thought it was a good idea.
Over the next few weeks we hammered out some more ideas over lunches and came up with a game prototype. We played it and really enjoyed it. We spent the next few months refining the game, asking friends and co-workers to play it and give us feedback. What we came up with was a pretty interesting and fun game.
The basic premise is that you are trying to build a complete meal (one entree, two sides, a drink and a dessert) and have the lowest total calories. Sounds pretty easy, except for the sabotage cards. Other players can sabotage your meal by playing certain cards on your food. For example, if you play a side salad (low calories) your friend could play a gravy card on top of it to add calories. I mean, gravy with salad sounds great doesn’t it? That is where much of the fun comes in, discovering some unique, and disgusting, combinations of foods.
So head on over to our KickStarter page and take a closer look. We even have some live game play videos up so you can see the game in action. We would love it if you thought the game was fun enough to back us, but we would love it even more if you told your friends.
Posted on June 10th, 2013
So for the past few months I have been fighting with the Cisco APs to make Bonjour and multicast work. What I would notice is that Bonjour would "sorta" work, and generally would work for a while before stopping altogether. Worse yet, it seemed like the Bonjour services that I cared most about (AirPlay and AirPrint) were the ones most likely to not work. Other services seemed pretty "solid" by comparison.
A second issue that came to light while testing all this was that iOS devices seemed to drop off the network after a while. For example, we would set an iPad to not auto-lock, connect it to WiFi and then let it sit there. After a while (anywhere between 5 and 30 minutes) it would just disconnect. This was also a big issue because if you were not paying attention you would assume WiFi was still working and end up downloading content over cellular instead of WiFi.
I'll start with the second problem as that seemed to be easier to fix. I will also note that I won't claim these solutions are the proper ones; they are simply the ones that seemed to work for me. I figured out what was going on by running
show dot11 associations xx:xx:xx:xx:xx:xx
and looking at the Connected, Activity Timeout and Last Activity counts. The first thing I noticed was that the timeout was set to a maximum of 60 seconds. The second was that "Last Activity" always increased, even if I was actively browsing the internet or doing other unicast-type traffic like pinging.
The best explanation I was able to come up with was that this was due to having different WPA settings on different SSIDs. So I did two things. First, I made all the WPA settings the same: wpa version 2 on the dot11 ssid configurations. Then, under the interface Dot11RadioX config sections, I set the encryption modes to only aes-ccm; I also added a non-VLAN encryption setting with the same cipher:
encryption mode ciphers aes-ccm
encryption vlan 1 mode ciphers aes-ccm
encryption vlan 40 mode ciphers aes-ccm
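For reference, the SSID side of that change looked roughly like this. Treat it as a sketch from memory; the SSID name, VLAN and EAP method list name are just examples:

dot11 ssid EXAMPLE-SSID
 vlan 1
 authentication open eap eap_methods
 authentication key-management wpa version 2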
This SEEMS to have resolved the issue of activity. Now the Last Activity shows correctly and properly tracks communication on the device. One last thing I did as well “just because” was to up the timeout to 5 minutes:
dot11 activity-timeout unknown default 300
I may not keep this last setting as I don't know that it is needed anymore; I put it in initially to try to work around the problem. For completeness, what seemed to be happening was, as I said, that the Aironet thought the device wasn't talking, so after 60 seconds it would disconnect it. My laptop experienced the same problem but would keep reconnecting forever. The iOS devices seemed to only reconnect a few times before giving up, presumably assuming the AP was broken since it kept disconnecting them.
Now back to the first problem of Bonjour and multicast. This ended up being an incredibly simple fix, though again I'm not sure it is the "correct" fix; it just seems to work. I will also say it may very well require that the above fix be in place too. The problem I kept running into was that Bonjour and multicast worked great on unencrypted, WEP and WPA Personal networks. On WPA Enterprise it would not work. Under the interface Dot11RadioX configuration I added a single line:
broadcast-key vlan 1 change 60
According to the documentation, this updates all clients with a new broadcast key every 60 seconds for the specified VLAN. So, as I said, this has fixed the issue, with one caveat: it can sometimes take up to about 30 seconds for all the Bonjour stuff to show up. That seems to be related to the broadcast key rotation. My best guess is that if the device connects towards the end of the broadcast key rotation period it doesn't get the current key and has to wait for the new key. So if you connect to WiFi and then immediately try to AirPrint or AirPlay it may not show up for a few seconds. So far this has not been a big deal.
Posted on May 15th, 2013
Our database software runs on Microsoft's .NET platform. The newest version of that software switched to .NET 4.0. While we have not yet updated to that version, another church using the same software did last week. They also use the same children's check-in software that we do. Everything went pretty smoothly for them except for one problem that only cropped up after the upgrade on the production server – check-in no longer worked. A minor issue when you have 800 kids to be checked in over the weekend services.
Long story short, we worked through various tests and google results and came up with 2 causes for the problem.
Issue #1 – Microsoft thinks all UserAgent strings are short.
This may have been a change in .NET 4 or it may have always been this way, I'm not sure. But in .NET 4 at least, IIS assumes all UserAgent strings will be shorter than 64 characters. We were very confused because the "windowed" Safari (i.e. tapping Safari and browsing) would usually work, but would randomly stop working. The full-screen Safari (going to a web page and then adding it to the home screen) would usually not work, but would randomly work. This was caused by the fact that IIS (also possibly new in .NET 4.0) caches the browser capabilities based on the UserAgent string given by the browser. Here is the catch: that cache key is, by default, limited to the first 64 characters. So if two different browsers have different UserAgent strings but the differences come AFTER the first 64 characters, they will be treated the same. Take a look at this website for a list of browser strings through the years, going all the way back to the days of AOL. UserAgent strings have been longer than 64 characters for ages; Microsoft really dropped the ball on this one.
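As an aside, that cache key length looks like it can be raised in web.config. I have not tested this myself, and the attribute name here is my recollection of the documented setting, so verify it before relying on it:

<system.web>
  <!-- assumed attribute name; supposedly controls how much of the UserAgent is used as the capabilities cache key -->
  <browserCaps userAgentCacheKeysLength="256" />
</system.web>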
Issue #2 – Apple thinks full-screen Safari should not have a version number.
Say what? Yes. The UserAgent string for Safari when in full-screen mode (maybe embedded browsers too, not sure) does not include a Version number or Safari build number. Here are samples of the two strings:
Regular Safari: Mozilla/5.0 (iPad; CPU OS 6_1_3 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10B329 Safari/8536.25
Full-screen Safari: Mozilla/5.0 (iPad; CPU OS 6_1_3 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Mobile/10B329
You may also note that character 64, where the UserAgent was being truncated for caching purposes, falls inside the "536.26" AppleWebKit version number, before the point at which it would realize the two strings are different. Prior to .NET 4.0, Microsoft's browser matching used a much simpler detection scheme which detected the full-screen Safari as a generic "smart" browser. The new detection code is much more specific and detects full-screen Safari as, essentially, the first web browser ever created. Meaning completely featureless. Basically it doesn't match anything so it gets thrown into the "default" browser match.
What we had to do was build a pattern match that would detect full-screen Safari and treat it as a real browser. Because we don't have specific version numbers to test against, we treat them all as Safari version 3. In .NET 4.0, Safari 3, 5 and 6 are all treated as Safari 3 browsers. Safari 4 is treated as Safari 4 (but only bumps one capability version number from 1.6 to 1.7), so we played it safe and went with Safari 3 since that is what current versions of Safari are treated as anyway. Here is what we had to do. In the folder that contains your web.config you need to add an App_Browsers folder and then drop the browser file in there for it to be picked up by IIS. The file can be named anything as long as it ends with .browser, though we named ours safari_mobile_fullscreen.browser.
Once we dropped the code below into that folder, the second problem was resolved, which solved everything. They reported perfect success on the weekend children's check-in system.
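For anyone wanting to roll their own, a .browser definition along these lines is the general idea. This is only a sketch: the id, parentID and regular expression are placeholders, and the parentID in particular needs to match one of the browser ids that actually ship in the .NET 4 browser definition files:

<browsers>
  <!-- full-screen Mobile Safari: UserAgent ends in Mobile/xxxxx with no Version/ or Safari/ token -->
  <browser id="SafariMobileFullscreen" parentID="Safari">
    <identification>
      <userAgent match="like Mac OS X\) AppleWebKit/[.\d]+ \(KHTML, like Gecko\) Mobile/\w+$" />
    </identification>
    <capabilities>
      <!-- treat it as Safari 3, per the reasoning above -->
      <capability name="browser" value="Safari" />
      <capability name="version" value="3.0" />
      <capability name="majorversion" value="3" />
      <capability name="minorversion" value="0" />
    </capabilities>
  </browser>
</browsers>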
Posted on April 12th, 2013
So, you would think that with an anime about a bunch of schoolgirls in skimpy dress dancing on a stage, they would take every opportunity to show off some inappropriate clothes or body parts. Surprisingly, this show is completely clean. There was some "chest grabbing" by one of the girls as a method of getting people to talk, but it really is not done in a provocative or erotic way. More of a "talk or I'll squeeze harder and cause pain" sort of thing.
Nothing really stands out about the show. It has a decent plot, at least it starts out decent. Three friends find out their school is going to close if it doesn't get enough interest from prospective students, since enrollment has been declining. They don't like that one bit, so they (rather, the main protagonist Honoka) decide they are going to do something about it. (I don't remember if they said it was a girls-only school, but I don't remember seeing any boys.) So of course, how do you get girls interested in going to a girls' school? You put girls on stage in flashy costumes… Uhh, yeah, I guess, maybe… possibly… that works… It would make more sense to me if it was a bunch of young boys in the audience jumping up and down; but what do I know, I'm not a teenage schoolgirl from Japan. Like I said, the plot starts out good but that is about as far as it goes; it doesn't really develop into anything further.
Much like the plot, everything else about the show is decent, but doesn’t really go anywhere. This is one of those shows which I enjoyed watching, but will honestly probably never watch again. Entertaining once, but without a plot that develops it’s kind of hard to have re-watch appeal.
Summary: Good show. Nothing spectacular. Decent story of girls trying to save their school from closing.
Season length: 13 episodes
Episode length: 22 minutes
Language: Japanese with English subtitles
This is one of those shows that kind of pisses me off at the end. Not because they bring somebody back to life at the end just to make a happy ending or anything like that. But because it is a great, clean show right up until the end and then they throw in some garbage just, well, because. I know it probably sounds odd to put it this way, but I get when a show puts garbage and nudity up front: because they want to get people hooked who will hope for more later. I don’t get when they do that at the end because it really doesn’t serve any purpose. Everything is clean for 23 episodes and then on episode 24 they put in some junk. Really? What just to make some shmuck happy and feel like he didn’t waste 6 hours of his life “for nothing”? Bah, whatever. </rant>
That being said, I absolutely loved this show. It did a great job on story line, plot development, comedy, developing characters, pretty much everything. The story revolves around a virtual reality MMORPG (Massively Multiplayer Online Role Playing Game) where people hook up to a virtual reality system so that they can be fully immersed in the game universe. However, something goes wrong shortly after the launch of the latest game. The players are all trapped in the game and will not be allowed to disconnect from the system until somebody beats the game – but that could take months! In the meantime, if you die in the game, you die in the real world, so it quickly becomes a dog-eat-dog world with players fighting to survive, literally.
Kirito is a skilled swordsman who has been playing these games for a long time. Does he have the skill to complete the game and beat the final boss? His friends think he might, but he is too afraid of making a mistake and getting people killed. Especially another player that has become very important to him. Can he manage to beat the game, and save this newbie girl that has become the center of his life? Will his skill be good enough to save everyone or will he have to sacrifice people along the way to ensure his own survival?
Back to episode 24. About halfway through the episode one of the female characters is being held hostage. The antagonist rips her top off, leaving her exposed for, I would guess, about 5 minutes of the show. Most, if not nearly all, of the time the camera angle is from the back or mostly obscured by hair from the side. At least one time, however, it pretty much does a full frontal shot from a medium distance. While technically a few strands of hair block things, there is not much left to the imagination of the viewer. If the studio ever re-edited this scene to remove that part or obscure things more, I would have called this a perfect show.
At the time of writing, this was an extremely popular show on the streaming sites. There are multiple manga volumes, a video game, 2 light novel story arcs (over 13 volumes between the two) and of course the anime. It has been licensed for release in North America with an English dub. And this story did not even begin until mid 2009, with most everything starting late 2010. The primary light novel driving everything has 12 volumes so far. The first 2 are the Aincrad story arc and the next 2 are the Fairy Dance story arc, both of which are covered by this anime. There are another 4 or 5 story arcs covered by the light novels, meaning there is room for another season of the anime, and indeed there are rumors of a continued show. I say continued show instead of a season number because there is debate on whether Fairy Dance was season 2 or part of season 1 (there was no break, and indeed it is listed as episodes 14-25 I believe). Anyway, the next story arc would likely be Gun Gale Online if you want to try and find definitive information on it.
Did I enjoy the show? Yes, despite the junk in episode 24. Will I watch more of the show if they make new seasons? Definitely.
Summary: Great show and story line. Would watch it again! Lots of action without taking over the story completely.
Season length: 25 episodes
Episode length: 22 minutes
Content: Clean with the exception of episode 24.
Language: Japanese with English subtitles (English dub, probably summer 2013)
This is another show I was really surprised by. Let's face it, much like Love Live! this is a show about a girls-only high school. I fully expected to shut it off after the first or second episode due to an obscene amount of underwear and skin being flashed around. As I said, I was surprised. There was absolutely nothing improper in the show (as always, take this with a grain of salt – who knows if something happened while I looked away, but going through 12 episodes and never seeing anything is a good sign). The worst I ever saw is part of one episode where most of the girls are in their swimsuits washing the tanks to get them all cleaned up and ready for operation.
Girls und Panzer, that is Girls and Tanks, centers around a group of girls trying to save their school. Sound familiar? Seriously though, this story has absolutely no connection to Love Live! and a completely different story line. In Love Live! the issue was the school didn’t have enough students so unless they can get more prospective students interested, the school gets shut down. With Girls und Panzer, the school is going to be shut down – period. Basically the girls get the powers that be to agree that if they can win a tankery competition the school can stay open. Winning means beating every single other school participating, and oh yeah, they haven’t had a tankery program at their school in years.
What is tankery? Well, some schools have tennis, some have soccer, some have baseball and some? Well some have tankery! Tankery is the “sport” of tank warfare and learning to be proper girls through that sport. Uhh, okay. So aside from that, the idea is pretty cool. You get two schools together with a bunch of tanks on each side manned by school kids. They go around a practice field (which includes the town, too bad if your shop gets blown up!) trying to disable the flag-tank of the other team. And yes, they use live rounds! Seriously though, there is no way tankery could actually exist without 80% of the participants dying each round. Even still, it’s kind of like paintball wars but with tanks. I’m thinking that would be a cool game to play!
That is basically the story. Can the girls manage to get all the way to the top and win? Beyond that, they build the characters up pretty well. The primary plot doesn't develop much, but there are a few side plots that develop pretty nicely. In addition to that, they do some pretty cool "special effects" (if you can call it that in animation) in regards to the tanks, points of view, and creative ways of fighting tank vs. tank. I thoroughly enjoyed the show even though it felt pretty short (12 episodes). I think there are actually 15, but 3 of them are ".5" episodes where they just rehash the past few episodes and introduce some of the new characters in more detail.
Summary: Fun, clean show. Good entertainment with a creative idea driving it all (tankery).
Season length: 12 episodes
Episode length: 22 minutes
Content: Very clean.
Language: Japanese with English subtitles
Posted on March 26th, 2013
If you are not familiar with Munki you need to go take a look right now. It is possibly one of the easiest software management tools I have ever seen. It works on the same principle as Apple's own Software Update system. You build a catalog of packages that you want installed on computers and it makes sure that software is installed and kept up to date. What munki provides beyond that is the ability to give each computer its own manifest of software. A manifest tells it what software to install and what catalogs to pull from (i.e. production, testing, development, etc.).
Recently I spent some time getting our setup ironed out and better designed, and here is what I came up with. I’m not going to go through a step-by-step process for everything I did as this is a more open-ended install setup. Managing computers like this requires a bit more knowledge of the product itself. So you really should take a look at some of the munki documentation, particularly the getting started guide, as that will give you some understanding of the terms I will be using.
Setting up the munki repository
First off, I built a VM to run all this in. It is a simple Fedora 18 install with no GUI. You can use any *nix flavor you want, but I am most familiar with Fedora. The server is named munki and has a few DNS aliases pointing to it as well, which I will get to in a moment. I installed netatalk to provide a few AFP shares, but for this purpose I primarily use it for a single share: the munki repository. I need to be able to manage the repository from my desktop Mac as that is where the command line tools are installed. Sadly there are no tools for managing the munki repository on *nix, only Mac. Since *nix does not have (good) support for things like Apple's .pkg format, or even .dmg, it would be more trouble than it is probably worth to try and go for a pure *nix setup.
Anyway, I created a /var/www/munki/html folder that will store the repository and shared it with netatalk to just our “staff admin” users. This folder is also shared by apache under the hostname munki.mydomain.com. Next mount the share on the Mac and initialize the repository (or in my case, copy my existing repository to this location). For this I use MunkiAdmin, which provides a nice GUI for working with the munki repository. Now with your repository created you can start building up your packages, catalogs, manifests, etc. I can’t help you with that specifically, but later on I will give you an idea of how we structure our manifests.
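The Apache side of that is nothing fancy. A minimal virtual host along these lines (the file path is the Fedora convention; the hostname and DocumentRoot match what I described above) is all it takes:

# /etc/httpd/conf.d/munki.conf
<VirtualHost *:80>
    ServerName munki.mydomain.com
    DocumentRoot /var/www/munki/html
</VirtualHost>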
One thing to note here is that I put the /var/www/munki/html folder on its own hard disk volume in the VM. Currently my repository is 5.7GB, so to give myself some growing room I created the disk as a 15GB LVM volume. Using LVM means I can increase the size of the disk later without rebuilding everything.
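Growing it later is the usual LVM routine. A rough example, assuming ext4 and made-up volume group/logical volume names:

# add 10GB to the logical volume, then grow the filesystem to match
lvextend -L +10G /dev/vg_munki/lv_munkirepo
resize2fs /dev/vg_munki/lv_munkirepo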
Setting up your web interface
The same guy who wrote munki also wrote munkiwebadmin, which is a web interface to munki. Primarily it is for monitoring and seeing status information, but you can do some basic manifest editing. That is handy for moving something from one manifest to another, but since you can't upload new packages you will probably do most of your true admin work in MunkiAdmin. However, I found munkiwebadmin (yes I know, sorry for the confusing names here) to be most powerful in its reporting abilities. I followed the install instructions here with a few minor changes. I created my virtual environment in /var/www/munki, so the final path became /var/www/munki/munkiwebadmin_env. You will also be pointing your repository to /var/www/munki/html so it can find everything. In Apache I set up a new virtual host of munkiadmin.mydomain.com that handles the WSGI stuff and access to the website, and obviously also set up a DNS alias to point to the server.
For the install I used MySQL instead of the default SQLite. The reason for this is that I also installed phpMyAdmin so that I can get CSV exports of the inventory data from munkiwebadmin. We only keep computers for 5 years and then they get sold off, usually to staff at the going eBay price minus a little. Using this list lets me sort the computers by how old they are and then take the bottom 1/5 of the list as the computers to be replaced this year. I can do the same in an SQLite database, just not as easily since that requires logging in and running some command line commands and then copying files across the network.
I also had to make 2 small changes to the source code, which you can find here and here. Hopefully by the time anybody works on this these two issues will be fixed. Basically those two changes make it so the manifest list does not include the files created by netatalk, and also address an issue when using MySQL as your database.
Finally, I built a package that contains the 3 scripts from munkiwebadmin. For building custom packages I use an app called Packages; it is simple and quick. The reason I build a separate package (which I called munkiscripts version 1.0) is that it lets me easily go back and update just the scripts. For example, if I decide that I want to have munki do some other pre/post-flight operations I can push out a new munkiscripts package with those updated scripts. In fact I have considered doing this to have the postflight script scan the console log for disk I/O error messages and notify me of potential hard drive failures. Anyway, all I have to do is bump the version number and import it into munki, and munki will install the new scripts on the next run. So yes, you will want to add this package to your repository.
Setting up your own SUS server
The last thing I wanted to do was have this server manage all software updates. Currently I have a Mac server doing this, and wasting a ton of disk space at the same time. There is an “application” called reposado that is basically a few linux scripts that handle downloading software updates from Apple and mirroring them locally on your server. I basically followed the getting started instructions. I created a new set of folders for reposado in /var/www/reposado/html and /var/www/reposado/metadata and pointed the configure command at those 2 folders.
I also created a 3rd hard drive in the VM and mounted it at /var/www/reposado. While the munki repository could live on your root VM disk, I recommend the SUS volume be a separate disk for one very specific reason: it's big and can always be downloaded again. At the time of writing my entire SUS repository is 100GB and will only get bigger as time moves on. The disk that backs this volume is set to not be backed up in VMware. If I lose it, I just run the sync command again.
So anyway, back to reposado: basically follow the instructions and create an Apache share for /var/www/reposado/html. In my case I created yet another virtual host, reposado.mydomain.com, to handle these requests and set up the DNS alias. I tend to do things this way as it makes it easier to migrate things from one server to another. For example, if I later decide to move reposado off the munki server I just have to update the DNS entry, not every client. Once reposado is set up, run the sync tool and go get a coffee, or lunch, or if you have a slow internet connection maybe a weekend. Like I said, 100GB.
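The sync tool itself is just a script in the reposado code directory; where that lives depends on where you installed it, so the path here is only an example:

cd /usr/local/reposado/code
./repo_sync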
We have all this great stuff happening on our server, but nothing is talking to our server just yet. Three things need to happen to get all your clients integrated happily.
First we need to get munki installed on each client workstation. The software needs to be installed (which is pretty straightforward) and it then needs to be configured to point to your repository. You can do the latter any way you want. You can run a terminal command to set the properties, use managed preferences, whatever. I use managed preferences so that I can easily change settings per-client. The munki install docs talk about various ways to do this. Your repository URL will be something like http://munki.mydomain.com.
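For example, the terminal route is just writing the SoftwareRepoURL preference (the hostname here matches my setup, adjust for yours):

sudo defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL "http://munki.mydomain.com"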
Second we need to run munki so that it downloads all initial software, including the scripts. This will either happen after about an hour or you can use a terminal command to have it check now. I recommend doing a “check now” on a few clients just to be sure everything is working the way you expect, that way you are not waiting for hours to find out something was configured wrong.
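The "check now" is munki's own command line tool:

# check for (and download) anything this machine's manifest says it needs
sudo /usr/local/munki/managedsoftwareupdate

# then install whatever was staged
sudo /usr/local/munki/managedsoftwareupdate --installonly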
Finally you need to update the software update catalog URL used by Apple. Reposado has some examples of the URLs you will be using; yours will be something along the lines of http://reposado.mydomain.com/content/catalogs/others/index-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog for Mountain Lion (others will be similar). I use a modified version of a script I found here, run as a login hook. Scroll down a few comments to the one by the user feyd; that is the one you want (at least as a starting point).
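If you want to test a single machine by hand first, you can set the CatalogURL preference directly (this is what the login-hook approach automates):

sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL "http://reposado.mydomain.com/content/catalogs/others/index-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog"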
Reboot just to make sure everything is picked up. Wait a little bit and you should start seeing data populate in munkiwebadmin. Also you can verify that your clients are pulling from your SUS because the Software Update app (or App Store) will tell you where it got the update from, so it should have your reposado URL listed.
Example manifest structure
So the way munki handles everything is by manifest. You can either set a specific manifest or let munki use a default one (computer name, serial number, "site-default", etc.). But each manifest references packages to be installed/updated/removed as well as catalogs to pull those packages from. It can take some work to get things sorted out in an order that works well. I can't say what I have is best, but it works for me.
First, the reason I ended up with this layout. We have basically 4 different computer groups. Realistically 95% of our workstations are in 2 of those groups. We have a standard install group that every computer gets (this would include computers that are headless, or otherwise non-user centric). Then we have a staff group that is basically office computers which have people sitting at them. On top of that we have 2 specialty groups: Communications/Graphics/Video Department and Teaching Pastors. Each of those last 2 exists for one very specific reason. The first gets the latest version of the Adobe suite while the others get an older version (saves us some money doing it this way). The latter group gets a different set of installed modules for some Bible software we use, so the staff gets the packaged version and those 4 computers (currently) get a manually installed version.
Now, a quick bit of background on how manifests work. A manifest references catalogs to determine which version of a package to install. So you might have a production catalog and a testing catalog. You put the upgrade in testing, roll it out to your "early adopters" group, and then when you are satisfied there are no problems move it to your production catalog. A manifest can also include other manifests; however, if an included manifest references any catalogs, those catalogs will be searched first. So for example, my early adopters manifest is called infotech and everybody else uses staff. The infotech manifest includes staff. infotech also references the testing catalog. If the staff manifest references the production catalog then the production catalog will always be checked first, even if the software is mentioned directly in the infotech manifest.
What I have adopted to resolve this is a “split manifest” system. So these are the manifests I currently have:
- standard_software – Lists all the software, but no catalogs.
- standard – Includes standard_software and references the production catalog.
- office_software – Lists all the software for office computers, but no catalogs.
- staff – Includes the office_software and standard_software manifests, references production catalog.
- infotech – Includes office_software and standard_software manifests, references testing and production catalogs.
- graphics – Includes office_software and standard_software manifests, references graphics and production catalogs. (graphics catalog has the newer version of the Adobe suite)
- teaching – Includes office_software and standard_software manifests, references teaching and production catalogs. (teaching catalog has the different version of the Bible software)
Using managed preferences, it is fairly easy and straightforward to assign computers to various manifests and change them back and forth, though bear in mind munki may get confused if you change a computer's manifest dramatically. For example, if I install a computer as a teaching computer and then switch it to a staff computer, it will stick with the advanced version of the Bible software unless I manually remove it, and then munki will re-install the correct version.
So basically, when I say "split manifest" above, what I mean is one manifest to hold the package names and another manifest that the computers point to. So all the *_software manifests are what contain the software, but the computers point to standard, staff, infotech, and graphics. That allows me to easily create a graphics_testing manifest that one computer points to, which includes the same child manifests but references the testing catalog first so it gets the updated version.
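To make that concrete, a manifest like infotech is just a plist. A sketch using the standard munki manifest keys and the names above:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>catalogs</key>
    <array>
        <string>testing</string>
        <string>production</string>
    </array>
    <key>included_manifests</key>
    <array>
        <string>office_software</string>
        <string>standard_software</string>
    </array>
</dict>
</plist>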
Apple Software Updates
One other thing munki can do is install Apple Software Updates as well. This is useful if your users do not have admin accounts on their computers, and thus are unable to install the updates themselves. I have not yet rolled this out. From watching the discussion mailing lists it seems like it is not 100% stable. I do not mean stuff is broken and computers break, just that the update process is based upon an "Apple does their own thing, we think we have figured it out" principle. So occasionally Apple releases an update that behaves strangely because it doesn't fit into the mold of "we have figured it out" just yet and the mold has to be slightly modified. I think the worst I have seen is updates that keep showing up over and over again.
There is lots of documentation for how to turn this on (and off) so it may be worth your attention. I will be looking at it again once I have more time, but for us it is not a critical issue since most of our users have admin access on their computers so they install their own updates. As we move forward and (possibly) take that away this will be a bigger issue so we will look into it more.
Posted on March 6th, 2013
I have a few SQL scripts that provide data to be displayed on our home page. These queries calculate data over the past 4 months and display it as a chart so the users get a quick overview of the information. The problem is these queries take over 10 seconds to run. Obviously that is a problem. The usual way to deal with this is to create a table that holds the data you want displayed (the cache) and update that table in a background process, such as a SQL agent job. When searching for a way to do this online, this was the solution everybody recommended and ended up using. My problem is I didn't want to create a whole bunch of tables to hold the cached data for each query. I needed a way to basically have a single "cache" table that holds the results for various queries, each with different types of data (different column types/values, etc.).
What I ended up using was a combination of the FOR XML and OPENXML commands and a single “blob” table to store the data with a timestamp for when it was last updated. Since it will probably be easier to just see an example I will give a short example of how to go about doing this.
DECLARE @Results TABLE (Ordering INT, Campus VARCHAR(30), RealDate DATETIME, HeadCount INT)
DECLARE @guid UNIQUEIDENTIFIER
SET @guid = '00000000-0000-0000-0000-000000000000'

-- Check for cached data, only use it if it is less than 2 hours old.
IF EXISTS (SELECT * FROM util_cache WHERE [guid] = @guid AND date_modified >= DATEADD(HOUR, -2, GETDATE()))
BEGIN
    DECLARE @doc AS INT
    DECLARE @xdoc AS nvarchar(max)

    SELECT @xdoc = blob FROM util_cache WHERE [guid] = @guid

    EXEC sp_xml_preparedocument @doc OUTPUT, @xdoc
    SELECT * FROM OPENXML(@doc, N'/root/Result')
        WITH (Ordering INT, Campus VARCHAR(30), RealDate DATETIME, HeadCount INT)
    EXEC sp_xml_removedocument @doc

    RETURN
END

-- No cache found, populate @Results with the data that takes awhile to build...
INSERT INTO @Results ...

-- Save in cache
BEGIN TRY
    BEGIN TRANSACTION

    IF NOT EXISTS (SELECT * FROM util_cache WHERE [guid] = @guid)
    BEGIN
        INSERT INTO util_cache ([guid], date_modified, blob)
            VALUES (@guid, GETDATE(), (SELECT * FROM @Results FOR XML RAW ('Result'), ROOT))
    END
    ELSE
    BEGIN
        UPDATE util_cache
            SET date_modified = GETDATE()
                ,blob = (SELECT * FROM @Results FOR XML RAW ('Result'), ROOT)
            WHERE [guid] = @guid
    END

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
END CATCH

-- Select data for display
SELECT * FROM @Results
So what we are doing is declaring an in-memory table and a GUID to use. The in-memory table is what is used to build the actual data if the cache is not valid. The GUID is used to uniquely identify this cache object.
First we check the util_cache table to see if a valid entry for our guid exists. If it does, we take the nvarchar(max) value and throw it into the XML code to convert it back to a SQL result set. The sp_xml_preparedocument stored procedure basically opens the XML stream and parses it into something that OPENXML can use. The sp_xml_removedocument stored procedure closes the XML stream and frees up memory. In the middle is the OPENXML function which takes the XML document and an XML query for which data to return, in our case we want all the Result records under the root element. The WITH clause tells it how to format the data, in this case we just match the table format that we used to store it.
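For reference, the util_cache table behind all this only needs three columns. The exact types here are my assumption, based on how the columns are used above:

CREATE TABLE util_cache (
    [guid] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    date_modified DATETIME NOT NULL,
    blob NVARCHAR(MAX) NULL
)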
So if the cache data was not found or was expired, we need to build new results. The reason I do everything inside a TRY/CATCH block is so that if something goes wrong with the query, instead of getting an error page the user just gets an empty result set (i.e. a blank graph). The reason I do it in a transaction is to lock the util_cache table (if I understand how things work) so we don't have 2 threads trying to INSERT at the same time. Converting data into XML is as simple as adding the FOR XML RAW ('RecordName'), ROOT clause to the end of the SELECT statement. RecordName is the name of the node to generate for each record; this needs to match what you use in the OPENXML statement.
In reality the setup I have is pretty similar to the above, except that everything is inside a stored procedure that accepts a parameter for how old the cached data can be before it is considered stale. In the code that runs on the homepage I use 36 hours (the data doesn't change that often, so this is fine). Then I have a SQL agent job run every 24 hours that calls the same stored procedure and passes it a value of 0, which forces the data to be rebuilt immediately. So the user should never notice a delay, but if the SQL job fails to run, a single user might notice a 10 second delay once a day.
Posted on February 18th, 2013
Five students who don't know each other and don't really care what they do when they enter high school end up joining the same, non-existent, school club called the Culture Society – which consists of them just sitting around talking and having fun. When we join the story they have all since become friends through the club. Things begin to go wrong for them when an unknown entity decides it is bored and starts to play games with their lives, beginning by making them randomly swap bodies. The club members fight their way through this bizarre phenomenon as they learn to deal with their individual pasts and problems. Finally things seem to be going better for them after the entity tells them they will no longer be swapping bodies – but things are only just getting started.
The show is technically clean. I have found that it is easy for me to dismiss certain things by telling myself you see the same thing at the beach, or in TV commercials, or even nowadays at the supermarket check-out stand. This show has, that I can recall, two incidents in which nothing is technically shown but which fall into that category of "nothing bad really happens but still shouldn't be approved of". One incident involves one of the ladies partially undressing and crawling over a table. The other incident (I don't want to ruin the story, so just go with me on the assumption that nothing naughty is going on during this scene) is with a different girl "coming to" without wearing anything up top; the shot of her as she stands up has her covering herself with her arms and is very brief.
Kokoro Connect introduces some interesting concepts into what is essentially a high-school comedy/romance story. What makes a person a person? To paraphrase one of the characters in the story: A person, the essence of who they are, is defined as a combination of their unseen self. Their soul, personality, past events in their life. Yet we identify individuals by physical attributes. When we think of a person we think of their physical appearance, not their intangible attributes that make them who they are. So if those unseen characteristics are removed from one body and placed in another body, who is that person? Are they still the same person or are they somebody else?
A different concept proposed by the kids in this story has specifically to do with one's past. When something important happens to us, whether good or bad, it is fresh in our memory and hearts. We remember vivid details about what physically happened and we feel very strongly about the emotions caused by that event. As we grow older and "move away" from the event, those memories and feelings dull and fade, but they still color our future choices and actions. Someone may fall in love with another person because that person reminds them, on some unconscious level, of a childhood sweetheart they once had. What if this person is returned to his youth for a brief period of time, a time when his feelings for this sweetheart were at their strongest? When he then returns to his current age all those feelings would once again be fresh in his mind, and in his heart. Does he really love this new person in his life, or is it just because she reminds him of someone else? What if the latter is true, what then…?
I enjoyed this show because it brought in some of these interesting discussions as part of the story. It didn’t present them as a “sit down and listen to us talk philosophy” but rather worked them into the story of their lives as they try to cope with these strange events. While the show itself covers things like love, fear of others and things of that nature, the same conversations can be applied to our faith. To my own faith. When I accepted Christ all those feelings and emotions were strong and front-most in my mind. Over the years they have dulled and paled as I have grown older. How would my life change, how would I change my life, rather, if for 3 hours I was returned to that point in time where I had just accepted Christ and then came back to my current self with all those memories and feelings fresh?
Summary: Enjoyed the show, wish it had been longer to explore in more detail some of these (somewhat) philosophical ideas.
Season length: 13 episodes (I just checked and 4 more episodes were released 3 months after the series “ended”, but I have not yet watched #14-17)
Episode length: 22 minutes
Content: Mostly clean, see full description for the discussion of the “unclean” parts.
Language: Japanese with English subtitles
In a world where men serve beasts. In a world where beasts rule as men. In a world devoid of hope. This is the world Kyrie lives in. This is the world Kyrie survives in. He is a human who poses as a beast in order to survive and live peacefully, and living peacefully is all he wants. Morte, also a human, is bent on destroying the world. Not just ridding the world of beasts, but ridding the world of everything. Complete and utter destruction. She believes that the world is such a terrible place that it does not deserve to exist anymore. Morte, the sole member of the World Destruction Committee, accidentally exposes Kyrie's beastman disguise while trying to escape from the World Salvation Committee. Kyrie has no choice but to flee with Morte rather than be killed by the beastmen; as such he is labelled a member of the World Destruction Committee and unable to return to his peaceful life.
Shortly after, they meet Taupy, a dwarf bear (teddy bear) who happens to be a bounty hunter, who ends up joining with them because of a misunderstanding as well. Neither Kyrie nor Taupy want to activate the Destruct Code, which has the power to destroy everything, but they join with Morte in an attempt to keep themselves alive as well as to try and reason with Morte and dissuade her from destroying the world.
Through their adventures they make some friends and make some enemies. They all are exposed to people who give them reason to want to activate the Destruct Code as well as some people who give them reason to want to save the world instead. Will Taupy and Kyrie be able to change Morte’s mind, or will they finally come to agree with her and together destroy the world?
As recommended by reader Devin, Sands of Destruction is a pretty good show. It is completely clean, with the most risqué part of the show being an episode where Morte's dress rips slightly (showing absolutely nothing) and both Kyrie and Taupy spend the episode trying to repair it and keep it from ripping more, as Morte is unaware her dress is ripped. There are a handful of "soft" curse words. The show itself is a little on the slow side, but not so much as to be boring. It is really a matter of taste. It is an action-adventure story, however. If you are expecting the kind of action that Fairy Tail brings every episode you will be disappointed. If you are expecting a good wholesome show with action sequences instead of all "story", you should enjoy this show quite a bit.
Summary: Good clean show that offers 13 entertaining episodes with a story following 3 people who are trying(?) to destroy the world.
Season length: 13 episodes
Episode length: 22 minutes
Content: Completely clean, I always hesitate to recommend anything for kids without parents watching first, but I think it is clean enough to do so.
Haru is a young girl who is bored with life. And like all schoolgirls, at least all schoolgirls in movies, she has a crush on the most popular boy in her class. While walking home from school she sees a cat that is about to be hit by a truck and manages to save him. To Haru's surprise he stands up and thanks her for rescuing him, and promises to repay her shortly, after he has completed an important errand which he is currently on. Her life is about to get much more interesting.
Haru shortly learns that the cat she saved is the prince of the Cat Kingdom, and his father insists on thanking her by having her join his kingdom: by marrying the prince. Haru must escape the kingdom and return home before sunrise or she will be forced to remain a cat forever, but does she really want to go back home? Her only hope of escape is a cat figurine come to life called The Baron as well as his overweight friend Muta.
While obviously a kids' story, this actually has a lot of fun moments that I think adults would enjoy as well. The story is well done and provides kids with "that's funny" scenes while at the same time hinting at some more inside jokes that adults will appreciate. I really enjoyed the movie. For the most part I have been happy with the Studio Ghibli films and TV shows; there have been a few that I turned off, but that seems to be the exception rather than the rule with them. I have more on order in my Netflix queue from them. For those that are familiar with the Studio Ghibli film Whisper of the Heart, you will find that The Baron is a familiar character. In fact Whisper of the Heart was so successful that fans wanted another movie centered around the cat character, with this being the result.
Summary: Clean movie for kids with enough entertainment for adults as well.
Movie length: 75 minutes
Content: Completely clean, another kid movie that is very appropriate to watch with the kids.
Posted on December 28th, 2012
I have a Snow Leopard server running, obviously, 10.6.8. It does too many things. It serves AFP, OD, iCal, AddressBook, Wiki, RADIUS, Software Updates, and a few other things. That is just too much stuff on one server that goes down every time I do a little maintenance. What I wanted to do is setup a new Mac Mini server on Mountain Lion and migrate the collaboration services (iCal, AddressBook and Wiki) from the old server to the new one. Easier said than done. It seems all the public documentation is for how to migrate the entire server, not piecemeal.
I did some digging and found some useful scripts in the Server.app. Fire up Terminal and take a look in the /Applications/Server.app/Contents/ServerRoot/System/Library/ServerSetup/MigrationExtras folder. There are various Python, Perl and Ruby scripts to migrate various service information. A few of the ones you might be interested in:
- Web Config
- Calendar (also does Address Book/Contacts)
I will give an example of what I did. I found simple instructions on Apple's website for how to migrate the Wiki information from 10.6 to 10.7/10.8. I also needed to migrate the Calendar/Contacts information. I did this as a dry run to make sure these steps would work so I could test the new server; then I will set the server up clean again, do the final migration and go live.
First I mounted the time machine volume for the old Snow Leopard server. From the command line I ran the following:
- cd /Applications/Server.app/Contents/ServerRoot/System/Library/ServerSetup/MigrationExtras
- sudo ./70_calendarmigrator.py --sourceRoot /Volumes/Time\ Machine\ Backups/Backups.backupdb/augustine/Latest/Augustine\ HD/ --sourceVersion "10.6.8"
- sudo chmod -R -a# 0 /Library/Server/Calendar\ and\ Contacts/Data
This ran for about 5 minutes before it finished. You should probably make sure your Calendar and Contacts services are not running; I realized Calendar was running halfway through and turned it off. Luckily everything still seemed to work. When the process finishes it starts up the Calendar and Contacts services. There were a few more things I had to do to get everything working properly, so I will give an overview of the whole process.
- Use the 70_calendarmigrator.py script to migrate the settings and data.
- Use the Server app to select the appropriate Certificate for the Calendar service (it was set to none which caused proxy errors).
- Use the Server app to give users (or groups) permissions to use the Calendar and Contacts services (by default nobody has access even though they already have calendars).
- Restart the entire server. Technically you can just restart all the various services, but after a migration like this I would just restart the server to make sure all is well.
- Update DNS to point to the new server. (For testing I edited the /etc/hosts file of a client to simulate the DNS update and everything just worked)
Finally, a few caveats I ran into trying to get things to work:
- I ran into an issue with a "CalDAVAccountRefreshQueueableOperation error 500" in the iCal client and "Data Corruption Detected" in the caldav error.log. Once I checked the error.log I found that there was a Geo-location based reminder on my calendar (actually 4) that was causing issues for some reason. I'm not sure why. I had to use the psql command line tool to log into the PostgreSQL database and delete the offending entries from the calendar_object table.
- My user works fine in iCal but has an issue in the WebCal interface. I have 2 calendars, but the WebCal interface shows duplicates in the sidebar, though it does not duplicate the actual calendar data. Furthermore, when I try to edit the calendar it says I don't have permission. This may be related to the issue I ran into above, or it may not. Because this was a test server I believe I had already logged into the WebCal client before I imported the data, so I may have really fouled things up on my own username. I checked a few other users and none of them have any problems. I was able to delete one of the "ghost" calendars, and I can create events on the real calendar of that pair, but of the other 2 calendars (one real, one ghost) neither shows that I can delete it, and I'm a little stuck. I tried a full server restart but that did not fix it. Again, I think this is just because I used a previously used system instead of a clean one.
Overall the process was super easy and only took me about 2 hours to figure out, transfer the data, fix a few issues and be back up and running on the sandbox server. The key to the whole process was finding those migration scripts so I could migrate individual service data.
Here are some tips for getting things fixed if you run into similar Calendar issues. Just for the record, only my personal user account had issues with the Geo-location TODO items. Other users have Geo-location TODO items and they seem to work fine; what the difference is I don’t know. Anyway some tips on how to get into PostgreSQL to fix this. First, to actually login to PostgreSQL run this command:
psql -h /Library/Server/PostgreSQL\ For\ Server\ Services/Socket -U _postgres -d caldav
To find the offending records (in my case, they all had the street name of “Jenkins” so I was able to search on that):
select resource_id from calendar_object where icalendar_text like '%Jenkins%';
You should see it spit back a handful of records. If you see a large number of rows returned, you may want to try and fine-tune your query a bit. Once you have the query down, change the “select resource_id” to “delete” like this to delete those same rows:
delete from calendar_object where icalendar_text like '%Jenkins%';
Once this is done I recommend restarting the Calendar service just to be safe.