PuppetConf 2014 – Day 5

I was pretty wiped out last night and slept in a little this morning, but I still got over to the conference in time to get some breakfast before heading into this morning’s keynotes. The first speaker was Dan Spurling of Getty Images. He discussed how and why they’ve integrated Puppet into their environment, as well as some of his philosophy on development, operations, and getting everyone to play nice together. The second keynote was delivered by Alan Green of Sony Computer Entertainment America. He talked about how they use Puppet, and also about how they support their many internal groups and those groups’ extremely varied IT needs. After that, Luke came back on to do a Q&A session, which gave us some more insight into what’s going on in the Puppet universe.

Once the keynotes were done, we headed out to our technical sessions. I started with an introduction to MCollective, an asynchronous, queue-driven job management service that comes with Puppet. I’ve got some really good ideas on how this will be put to use on my customers’ systems. After that session it was time for lunch, and since I had a slightly longer break, I headed back to the hotel room to drop off some more exhibitor loot before returning to PuppetConf 2014 to grab a bite and get ready for the afternoon sessions.

The first session of the afternoon discussed OpenStack and how it can be managed with Puppet. It’s a pretty complex system, but there are modules out on the Forge that make it fairly simple and painless to set up, configure, and run using Puppet. Next up was what was supposed to be a tour of Puppet subsystems but really turned into an overview of part of the execution path of the Puppet agent and master code. It was interesting, but not really what I was hoping for. After that I headed over to catch a session about managing a multi-tier architecture using Puppet, which seemed like a good fit since we have a lot of that at our customers’ sites. Then there was the session put on by F5 Networks, covering their new REST API for managing their network gear. That is going to come in really useful, and considering you’ll be able to do just about everything you can do on the command line using REST calls, it’s going to rock! Our last session of the day covered Elasticsearch’s ELK stack and was delivered by Jordan Sissel. This was a product stack that I didn’t know too much about before now, but after this presentation I’m going to be spinning up a VM to try it out. It looks like it might be a good replacement for Splunk, with a bunch of extra functionality to boot.
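To give an idea of what driving one of those OpenStack Forge modules looks like, here’s a minimal sketch of my own; the openstack/keystone module, class, and parameter names are assumptions on my part and vary between module releases, so treat it as illustrative only.

```puppet
# Hypothetical sketch using the openstack/keystone Forge module; class and
# parameter names are assumptions and differ between module versions.
class { '::keystone':
  admin_token => 'changeme',
}

# The OpenStack modules also ship resource types for API objects such as
# tenants, so those can be managed as ordinary Puppet resources.
keystone_tenant { 'admin':
  ensure  => present,
  enabled => true,
}
```

The point is that once the modules are on your modulepath, an OpenStack service becomes just another set of declared resources.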

It was a good way to close out the conference, and I made my way back to the hotel pretty brain-fried from all of the information that has been crammed into my head over the last five days. The conference was great, and I hope I get the opportunity to go again next year.

PuppetConf 2014 – Day 4

Even though yesterday was technically the first day of PuppetConf 2014, everything really got going today. We started off by going to the keynotes, which were definitely interesting. The first one was given by Luke Kanies, the founder of Puppet Labs. He talked about where they’ve been, what’s going on now, and a little about what’s coming up. After he was done, Gene Kim, author of The Phoenix Project, took the stage to talk about DevOps. I’ve had a low opinion of the DevOps methodology for a while, but after listening to him talk, I think I’m going to have to get his book and reevaluate my opinions. The third keynote was delivered by Kate Matsudaira, CEO of popforms. She went over some career management and improvement strategies.

After the keynotes, we walked around the main hall checking out the vendors before heading to our selected sessions. I started by checking out the demos that Puppet Labs was running, got to try out the new Node Classifier (it’s pretty amazing), and joined the Test Pilot program. After the lunch break, I started in on the sessions with one about scaling Puppet for large environments. My next session covered auditing and security-related operations using Puppet, including enforcing basic security policies through classes.
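As a rough illustration of that last idea, here’s a minimal sketch of a class that enforces a couple of baseline settings; this is my own example, not code from the session, and it assumes the file_line resource from puppetlabs-stdlib plus standard file paths and service names.

```puppet
# Hypothetical baseline-security class; names, paths, and values are examples.
class profile::security_baseline {

  # Lock down permissions on the shadow file.
  file { '/etc/shadow':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0000',
  }

  # Disable root logins over SSH (file_line comes from puppetlabs-stdlib).
  file_line { 'sshd_permit_root_login':
    path   => '/etc/ssh/sshd_config',
    line   => 'PermitRootLogin no',
    match  => '^#?PermitRootLogin',
    notify => Service['sshd'],
  }

  service { 'sshd':
    ensure => running,
    enable => true,
  }
}
```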

After that session, I headed upstairs to do some last-minute reviewing before taking my Puppet Professional certification exam. I can’t talk about specifics of the exam because I signed an NDA, but let me say that it was pretty challenging and really made you think about the questions. I did pass it, and am now certified.

Taking the exam used up pretty much all of the rest of the afternoon, so by the time I was done and had met back up with my colleagues, it was time to head over to an off-site mixer sponsored by Puppet Labs. We all had a good time there, and got to spend some time talking to other conference attendees as well as Puppet Labs employees.

That pretty much wrapped up the day, and I headed back to the hotel to get some sleep to prepare for tomorrow.

PuppetConf 2014 – Day 3

Today was the final day of our Puppet Practitioner class. We went over classifying nodes using roles and profiles. This is something I’ve already been implementing, and it goes a long way towards taking the pain out of classifying nodes. We also covered the MCollective framework and how it can be used. With it you’re able to execute specific classes on specific nodes, which is something I’m going to be able to put to good use for some of my clients’ needs.
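For anyone who hasn’t seen the pattern, here’s a bare-bones sketch of roles and profiles; the module and class names are made up for illustration, and the apache class is just standing in for any Forge module you might wrap.

```puppet
# Profiles wrap individual technologies with site-specific configuration.
class profile::base {
  include ::ntp
  include ::ssh
}

class profile::webserver {
  class { '::apache':
    default_vhost => false,
  }
}

# A role is nothing more than a bundle of profiles; each node gets one role.
class role::web_frontend {
  include profile::base
  include profile::webserver
}

# Node classification then collapses to a single include.
node 'web01.example.com' {
  include role::web_frontend
}
```

The nice part is that the node definition (or external classifier) never has to know about individual modules, only about roles.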

We also covered manifest testing using the rspec-puppet and serverspec tools. I do a lot of software development in Perl, which has a good testing framework that I’ve used extensively, so it was a real pleasure to learn about and get to work with Ruby and Puppet’s equivalent tools. It was also nice to see that coverage analysis is available to help me build better test cases. In my opinion, not doing this kind of testing, and not looking at coverage analysis, is one of the leading causes of poor-quality code. No, the tools are not perfect, and yes, you do have to design the tests for the code, but they let you test your code before it ever touches a node, and that’s a Very Good Thing™.
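Here’s roughly what an rspec-puppet spec looks like; it’s a minimal sketch against the hypothetical profile::webserver class from the roles-and-profiles example above, not anything from the class materials.

```ruby
# spec/classes/profile_webserver_spec.rb
# Minimal rspec-puppet sketch; the class under test is hypothetical.
require 'spec_helper'

describe 'profile::webserver' do
  # Make sure the catalog compiles cleanly with all dependencies resolved.
  it { should compile.with_all_deps }

  # Check that the expected classes end up in the catalog.
  it { should contain_class('apache') }
end
```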

We wrapped up our last few class activities and then had some question-and-answer time with our instructor. After that, I packed up my stuff and headed down to the main hall to register for the actual conference. The training I’ve been taking was scheduled before PuppetConf 2014 proper started rather than during it, which means I won’t be missing any of the conference activities by being in a classroom. So now I’ve got my swag bag, my badge, and my conference t-shirt, and I’ll be working through a pretty solid schedule of presentations and sessions over the next few days.

PuppetConf 2014 – Day 2

Today at PuppetConf 2014 was another class day. We got into some really interesting stuff, and by interesting I mean your eyes are about to glaze over if you’re not channeling your inner (or outer) nerd.

Custom facts. I’ve actually worked with these before, but filling in the blanks was nice. Just remember that Ruby returns the value of the last statement evaluated if you don’t have an explicit return in there. That’s also known as the reason why I wasted hours, and learned how to debug Puppet modules, when I implemented my first custom fact.
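Here’s a tiny illustration of that gotcha; the fact name and logic are hypothetical.

```ruby
# lib/facter/role_tier.rb -- hypothetical custom fact.
# The fact's value is whatever the setcode block's last evaluated statement
# returns, because Ruby uses implicit returns.
Facter.add(:role_tier) do
  setcode do
    hostname = Facter.value(:hostname)
    if hostname =~ /^web/
      'frontend'
    else
      'backend'
    end
    # If a stray statement were added after the if/else (say, a debugging
    # puts, which returns nil), the fact would silently become nil instead
    # of 'frontend' or 'backend'.
  end
end
```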

Hiera. Learn it and use it, especially since classes can pull their defaults from it. My next thing is to explore using MySQL as a data source instead of YAML. But either way, get that data out of your modules and into Hiera. I’ve got a bit of that to do in my lab environment at home.
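As a quick sketch of what that looks like (the class, parameter, and server names here are mine, not from the class): a parameterized class declares a default, and Hiera’s automatic parameter lookup overrides it from data.

```puppet
# Hypothetical parameterized class; with Puppet 3+ automatic parameter
# lookup, ntp::servers is resolved from Hiera before the default is used.
class ntp (
  $servers = ['0.pool.ntp.org', '1.pool.ntp.org'],
) {
  file { '/etc/ntp.conf':
    ensure  => file,
    # The (hypothetical) template would iterate over $servers.
    content => template('ntp/ntp.conf.erb'),
  }
}
```

```yaml
# hieradata/common.yaml -- overrides the default without touching the module.
ntp::servers:
  - 'ntp1.example.com'
  - 'ntp2.example.com'
```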

File manipulation. Man, this is where the fun starts. Not that custom facts and Hiera aren’t fun, but really, if you want to do something that makes actual, real, useful changes on your systems, this is it. Between file_line, concatenating fragments to build files, and managing config files with Augeas, you’re pretty well covered for all of your file manipulation needs. I’m going to be experimenting with using Augeas to build DNS zone files for my home lab environment really soon now, and I’m starting to think that it could be used for internal DNS in a datacenter environment. It’ll be a lot easier than the manual processes I see in use currently.
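For reference, here’s a quick sketch of the three approaches side by side; the paths and values are illustrative, and file_line and concat come from the puppetlabs-stdlib and puppetlabs-concat modules respectively.

```puppet
# 1. file_line (puppetlabs-stdlib): make sure a single line is present.
file_line { 'kernel_panic_reboot':
  path => '/etc/sysctl.conf',
  line => 'kernel.panic = 10',
}

# 2. concat (puppetlabs-concat): build a file from ordered fragments.
concat { '/etc/motd':
  owner => 'root',
  group => 'root',
  mode  => '0644',
}
concat::fragment { 'motd_header':
  target  => '/etc/motd',
  content => "Managed by Puppet -- local changes will be lost\n",
  order   => '01',
}

# 3. augeas (built-in type): change one value inside a structured config file.
augeas { 'sshd_password_auth':
  context => '/files/etc/ssh/sshd_config',
  changes => 'set PasswordAuthentication no',
}
```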

Tomorrow is our last day of class, and then the conference starts. I’ve got a certification exam scheduled for later in the week, and while I was feeling pretty good about my chances, I think this class has definitely helped out. We’ll see once I actually get to take it.

PuppetConf 2014 – Day 1

I’m spending the week out in San Francisco for PuppetConf 2014. Puppet is a configuration management system used to keep the configurations of large collections of servers in sync, as well as to manage application deployment, users, and pretty much any other resource you can think of.
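To give a flavor of what that means in practice, here’s a tiny, hypothetical manifest; the resource names are generic examples rather than anything from my environment.

```puppet
# Declare the desired state; Puppet figures out how to get there on each run.
user { 'deploy':
  ensure     => present,
  shell      => '/bin/bash',
  managehome => true,
}

package { 'ntp':
  ensure => installed,
}

service { 'ntpd':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
```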

I’m here early because I’m taking one of their instructor-led training classes in order to fill in some of the blanks in my self-taught skill set. We had a rough start today because the wireless network that the hotel set up for us was not configured correctly for running training labs; it’s hard to spin up VMs on your laptop and get them on the network when you’re limited to one IP per vPort. We did finally get it working and are mostly caught up to where we should have been for today’s lesson plan.

After getting signed in and collecting my t-shirt, I headed up to my assigned classroom. We first worked through a review of some Puppet basics, then moved on to data structures like arrays and hashes, and to using virtual resources to simplify complex declarations. The class, and Puppet in general, are very Git-centric, which is nice to see. Since you’re, in effect, turning your site’s configuration into code, you need to put that code under some form of revision control, and Git is probably the best system out there. I’m not going to go down that rabbit hole, but seriously, it works really well.
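A quick sketch of the virtual resources idea (user names and groups are made up): resources are declared once with the @ prefix and only realized on the nodes that actually need them.

```puppet
# Declare users virtually; nothing is managed until they are realized.
@user { 'alice':
  ensure => present,
  groups => ['wheel'],
  tag    => 'admins',
}
@user { 'bob':
  ensure => present,
  groups => ['developers'],
  tag    => 'devs',
}

# On an admin host, realize only the users tagged as admins.
User <| tag == 'admins' |>
```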

The hotel’s facilities are really nice, with lots of meeting spaces for the classes and even some pretty good catering for breakfast and lunch. We finally broke for the day a little after 4:00 PM, and will be back tomorrow to do this all over again. I’m already filling in some of the blanks, and have a few modules at home that need to be rewritten…

Amtrak Day 2

Woke up this morning and headed over to the dining car for some breakfast and coffee. We were running a little late, so I called ahead and got Joi to head down to Bloomington to pick me up. It was good to see Reese, and man he’s grown! We went up to the Peoria Airport so that I could pick up the rental car. After that we went back to the house and chilled before getting some dinner. I did some work on Reese’s computer, getting the OS updated, making sure everything else was in good shape, and general fine tuning. After that we went over his packing and got everything ready for the week in Chicago.

LinkedIn’s Epic Security And Privacy Fail

LinkedIn released a new version of their iOS app a few days ago that includes a feature named Intro. It’s an interesting feature, integrating LinkedIn data into your emails, but the way they’re doing it is a spectacular fail. When you enable it, they add a profile to your iOS device that proxies all of your email through their servers. Yeah, they send your mail to them, scan it, modify it, then send it back. They claim it’s encrypted for privacy, but that’s a pretty lame claim since they’ve got to decrypt it in order to scan it and inject their content into it. Do we really need to go over all the ways this is a seriously bad idea? I’ll leave it up to the reader (all two or three of you, based on traffic stats) to decide whether or not you want a third party to have access to all of your electronic correspondence. It’s not like the NSA couldn’t put hooks in LinkedIn’s servers or anything.

I’ve deleted the app and won’t install it again. I’ve also checked my settings to make sure there are no additional profiles installed. If you want to see if it’s got its hooks installed, go to Settings->General->Profiles. If there are any LinkedIn profiles, delete them.

Here is a link to their official uninstall instructions.

Moving Everything To A New Server

Everything should be running now. I upgraded the hardware that this site, and the rest of my domains, are hosted on: more RAM, faster CPU cores, and faster disk. The move was pretty painless, though I’m still finding a few small behind-the-scenes issues, none of which affect the Internet-visible services. Next up is moving my domains to another registrar and getting IPv6 enabled here…

A Weekend Of VMware Fun

I decided to do some work on my VMware lab box, namely installing the 2TB drive I picked up for the VMware Data Protection appliance, getting the appliance up and running, and installing 5.1 Update 1. I got the drive in, the appliance installed, and the update applied, but then I was thwarted by a faulty motherboard: the ESXi box gets stuck in an infinite reboot cycle when hyperthreading is turned on, or when I push the system by bringing up the backup appliance. I’ve disabled hyperthreading and can get it to boot, but it still crashes under even minor load. It’s not a purple screen; it just drops, like the reset switch has been pressed. I think I’ve got it back up and running with hyperthreading disabled and only three cores active. We’ll see how long that lasts…

Looks like I’ll be opening a case with Asus on Monday morning to get the board replaced.

Western Digital MyBook Studio Edition II Epic Fail

As part of the laptop-to-desktop transition, I wanted more storage for the Mac Mini, but I also wanted some redundancy, so I picked up a Western Digital MyBook Studio Edition II. It came in yesterday after a bobble with FedEx on Friday. Seriously, FedEx, if the package is shipped to a residential address and requires a signature, 10:30 AM on a work day pretty much guarantees that you’re going to be coming back out. That being said, I am very impressed with the fit and finish of the actual device. It looks good, is very quiet, and was easy to set up.

And that’s where the wheels came off. I couldn’t keep the Finder stable with the RAID manager running, so I uninstalled it. It then started playing nice, or at least I thought so, until I checked the overnight SuperDuper! backup. It failed because the MyBook had nuked the entire FireWire bus. So at that point I yanked it, attached it to the HTPC via eSATA, changed it to RAID 0, and started a backup of my DVD library (all 4 terabytes of it). It did all of that without a hitch, so it’s going to stay attached there so that I’ve got a backup copy of months of DVD and Blu-ray ripping.

Oh, and I’ve got a Newer Tech Guardian Maximus on order for more storage for the Mini. It should be in on Wednesday.