<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" version="2.0" xml:base="https://www.linuxjournal.com/">
  <channel>
    <title>Configuration Management</title>
    <link>https://www.linuxjournal.com/</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>Orchestration with MCollective, Part II</title>
  <link>https://www.linuxjournal.com/content/orchestration-mcollective-part-ii</link>
  <description>  &lt;div data-history-node-id="1339406" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/kyle-rankin" lang="" about="https://www.linuxjournal.com/users/kyle-rankin" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Kyle Rankin&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
&lt;a href="http://www.linuxjournal.com/content/orchestration-mcollective"&gt;In my last article&lt;/a&gt;, I introduced how MCollective could be used for general
orchestration tasks. Configuration management like Puppet and Chef can help
you bootstrap a server from scratch and push new versions of configuration
files, but normally, configuration management scripts run at particular
times in no particular order. Orchestration comes in when you need to
perform some kind of task, specifically something like a software upgrade,
in a certain order and stop the upgrade if there's some kind of problem.
With orchestration software like MCollective, Ansible or even an SSH for
loop, you can launch commands from a central location and have them run on
specific sets of servers. 
&lt;/p&gt;
&lt;p&gt;
Although I favor MCollective because of its improved
security model compared to the alternatives and its integration with
Puppet, everything I discuss here should be easy to
adapt to any decent orchestration tool. 
&lt;/p&gt;

&lt;p&gt;
So in this article, I expand
on the previous one on MCollective and describe how you can use it to stage all
of the commands you'd normally run by hand to deploy an internal software
update to an application server.
&lt;/p&gt;

&lt;p&gt;
I ended part one by describing how you could use MCollective to
push an OpenSSL update to your environment and then restart nginx:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
mco package openssl update
mco service nginx restart
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
In this example, I ran the commands against every server in my environment;
however, you'd probably want to use some kind of MCollective filter to
restart nginx on only part of your infrastructure at a time. In my case, I've
created a custom Puppet fact called hagroup and divided my servers into
three different groups labeled a, b and c, split along fault-tolerance
lines. With that custom fact in place, I can restart nginx on only one group
of servers at a time:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
mco service nginx restart -W hagroup=c
&lt;/code&gt;
&lt;/pre&gt;
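&lt;p&gt;
For reference, one simple way to define such a custom fact is Facter's "external facts" mechanism, which reads plain key=value files from a standard directory. The sketch below is an assumption about how you might wire this up, not the article's actual implementation:
&lt;/p&gt;

```shell
# set_hagroup: record which fault-tolerance group this host belongs
# to as a Facter external fact (a plain key=value file). The second
# argument exists only to make the sketch easy to test.
set_hagroup() {
  group="$1"
  dir="${2:-/etc/facter/facts.d}"   # Facter's standard external-facts path
  mkdir -p "$dir" || return 1
  echo "hagroup=$group" > "$dir/hagroup.txt"
}
```

&lt;p&gt;
Run &lt;code&gt;set_hagroup c&lt;/code&gt; on each host in group c, and the &lt;code&gt;-W hagroup=c&lt;/code&gt; filter above will match it.
&lt;/p&gt;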


&lt;p&gt;
This approach is very useful for deploying OpenSSL updates, but with luck,
those occur only a few times a year. A more common task, and one ideal for
orchestration, is deploying your own
in-house software to application servers. Although everyone does this in a
slightly different way, the following pattern is pretty common. This pattern is
based on the assumption that you have a redundant, fault-tolerant
application and can take any individual server offline for software
updates. This means you use some kind of load balancer that checks the
health of your application servers and moves unhealthy servers out of
rotation. In this kind of environment, a simple, serial approach to
updates
might look something like this:
&lt;/p&gt;

&lt;ul&gt;&lt;li&gt;
&lt;p&gt;
Get a list of all of the servers running the application.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Start with the first server on the list.
&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;
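&lt;p&gt;
As a rough shell sketch of that serial pattern, with &lt;code&gt;update_app&lt;/code&gt; as a hypothetical placeholder for whatever deploy command your environment really uses:
&lt;/p&gt;

```shell
# deploy_serial: update one server at a time, aborting on the first
# failure so a bad build never reaches the whole fleet.
# update_app is a placeholder for your real deploy command.
deploy_serial() {
  for server in "$@"; do
    echo "updating $server"
    update_app "$server" || { echo "stopping: deploy failed on $server"; return 1; }
  done
}
```

&lt;p&gt;
You would then call it with your full server list, for example &lt;code&gt;deploy_serial app1 app2 app3&lt;/code&gt;.
&lt;/p&gt;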
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/orchestration-mcollective-part-ii" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Thu, 08 Jun 2017 13:07:25 +0000</pubDate>
    <dc:creator>Kyle Rankin</dc:creator>
    <guid isPermaLink="false">1339406 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Orchestration with MCollective</title>
  <link>https://www.linuxjournal.com/content/orchestration-mcollective</link>
  <description>  &lt;div data-history-node-id="1339381" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/kyle-rankin" lang="" about="https://www.linuxjournal.com/users/kyle-rankin" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Kyle Rankin&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
I originally got into systems administration because I loved learning about computers, and I
figured that was a career that always would offer me something new to learn. Now, many
years later, that prediction has turned out to be true, and it seems like there are new things to learn
all the time. In particular, every now and then a new technology comes around that dramatically
changes how sysadmins do their jobs. For instance, in the October 2012 issue of
&lt;em&gt;LJ&lt;/em&gt;, I wrote an article titled
&lt;a href="http://www.linuxjournal.com/content/how-deploy-server"&gt;"How to
Deploy a Server"&lt;/a&gt; where I described the progression of how sysadmins deployed servers from
by-hand bespoke configuration, to images, to post-install scripts and finally with
configuration management.
&lt;/p&gt;
&lt;p&gt;
So in this article, I'm going to expand on that concept to talk about how
to use orchestration tools (in particular, MCollective) to manage orchestration tasks on servers
post install. Many MCollective installation guides already exist, so I won't
repeat that here; instead, my goal is to provide examples of how these tools can
automate administration tasks further and to describe how I personally use them. And although I'm specifically
discussing MCollective, these same concepts can be adapted and applied to any number of
other orchestration tools.
&lt;/p&gt;

&lt;p&gt;
These days, configuration management still is one of the most popular ways for sysadmins to
configure a server, but over time, many administrators started pushing these tools past
configuration management into what's being called orchestration. Orchestration refers to tools
to help you push changes—in particular, software installation and updates—across your
environment in a measured, staged way.
&lt;/p&gt;

&lt;p&gt;
Although some administrators might be fine with pushing
software updates randomly, if you want smooth upgrades, you usually follow an approach
where you update one server first, then, if that succeeds, update a few more before updating the
rest. Before you update software, you may want to notify upstream systems so they can stop
sending traffic, and after you update the software, you may want to restart the service. This
process is nothing new; it's just that in the past, administrators would do this by hand by
logging in to machines one by one, or they would write custom scripts. With orchestration tools,
you can perform these same steps from a centralized location.
&lt;/p&gt;
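&lt;p&gt;
The notify-update-restart sequence above can be sketched as a small per-server wrapper; every helper name here is a hypothetical placeholder for whatever commands your environment really uses:
&lt;/p&gt;

```shell
# staged_update: the per-server sequence described above. Each step
# short-circuits the rest on failure, so a broken update never gets
# its service restarted and put back into rotation.
staged_update() {
  host="$1"
  notify_upstreams "$host" || return 1  # tell upstream systems to stop sending traffic
  update_software "$host"  || return 1  # install the new version
  restart_service "$host"  || return 1  # bring the service back up
}
```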

&lt;p&gt;
The line between configuration management and orchestration is a bit clearer with tools like
Puppet and Chef than, say, with SaltStack or Ansible. Although Puppet and Chef can run in a masterless
way, the default approach is to have clients check in to a master server periodically to see
whether they comply with the central configuration and, if not, to make changes until they
do. Usually, you have clients check in to the master in a somewhat randomized way or otherwise
send them a trigger to apply changes.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/orchestration-mcollective" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Thu, 11 May 2017 09:17:28 +0000</pubDate>
    <dc:creator>Kyle Rankin</dc:creator>
    <guid isPermaLink="false">1339381 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Manage Your Configs with vcsh</title>
  <link>https://www.linuxjournal.com/content/manage-your-configs-vcsh</link>
  <description>  &lt;div data-history-node-id="1197965" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/bill-childers" lang="" about="https://www.linuxjournal.com/users/bill-childers" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Bill Childers&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
If you're anything like me (and don't you want to be?), you probably have more
than one Linux or UNIX machine that you use on a regular basis. Perhaps
you've got a laptop and a desktop. Or, maybe you've got a few servers on which
you have
shell accounts. Managing the configuration files for applications like
mutt, Irssi and others isn't hard, but the administrative overhead just
gets tedious, particularly when moving from one machine to another or setting
up a new machine.
&lt;/p&gt;
&lt;p&gt;
Some time ago, I started using Dropbox to manage and synchronize my
configuration files. What I'd done was create several folders in Dropbox, and
then when I'd set up a new machine, I'd install Dropbox, sync those folders
and create symlinks from the configs in those directories to the desired
configuration file in my home directory. As an example, I'd have a directory
called Dropbox/conf/mutt, with my .muttrc file inside that directory. Then,
I'd create a symlink like &lt;code&gt;~/.muttrc -&gt;
Dropbox/conf/mutt/.muttrc&lt;/code&gt;. This
worked, but it quickly got out of hand and became a major pain in the neck to
maintain. Not only did I have to get Dropbox working on Linux, including my
command-line-only server machines, but I also had to ensure that I made a bunch of
symlinks in just the right places to make everything work. The last straw was
when I got a little ARM-powered Linux machine and wanted to get my
configurations on it, and realized that there's no ARM binary for the
Dropbox sync dæmon. There had to be another way.
&lt;/p&gt;
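&lt;p&gt;
Spelled out as commands, the setup described above amounts to something like this, using the article's example paths:
&lt;/p&gt;

```shell
# Recreate the Dropbox layout by hand: keep the real .muttrc inside
# Dropbox and point a symlink at it from the home directory.
mkdir -p "$HOME/Dropbox/conf/mutt"
touch "$HOME/Dropbox/conf/mutt/.muttrc"                  # the real config file
ln -sf "$HOME/Dropbox/conf/mutt/.muttrc" "$HOME/.muttrc" # symlink in $HOME
```

&lt;p&gt;
Multiply that by every application and every machine, and it's easy to see how the symlink bookkeeping gets out of hand.
&lt;/p&gt;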

&lt;span class="h3-replacement"&gt;
...and There Was Another Way&lt;/span&gt;

&lt;p&gt;
It turns out I'm not the only one who's struggled with this. vcsh developer
Richard Hartmann also had this particular itch, except he came up with a way
to scratch it: vcsh. vcsh is a script that wraps both git and mr into an
easy-to-use tool for configuration file management.
&lt;/p&gt;
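&lt;p&gt;
To give a feel for that workflow, here is a minimal first-time setup for a single repository using vcsh's git-like interface; the repo name and tracked file are examples, not prescriptions:
&lt;/p&gt;

```shell
# track_muttrc: start managing one application's config with vcsh.
# vcsh keeps a separate git repo per application and tracks files
# directly in $HOME, so no symlinks are needed.
track_muttrc() {
  vcsh init mutt                || return 1  # create an empty repo named "mutt"
  vcsh mutt add "$HOME/.muttrc" || return 1  # start tracking the file in place
  vcsh mutt commit -m "Add mutt config"
}
```

&lt;p&gt;
From there, everything is ordinary git: add a remote with &lt;code&gt;vcsh mutt remote add&lt;/code&gt;, push, and clone onto the next machine.
&lt;/p&gt;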

&lt;p&gt;
So, by now, I bet you're asking, "Why are you using git for this? That sounds
way too complicated." I thought something similar myself, until I actually
started using it and digging in. Using vcsh has several advantages,
once you get your head around the workflow. The first and major advantage to
using vcsh is that all you really need is git, bash and mr—all of which are
readily available (or can be built relatively easily)—so no
proprietary dæmons or services are required. Another advantage of using vcsh is
that it leverages git's workflow. If you're used to checking in files with
git, you'll feel right at home with vcsh. Also, because git is powering
the whole system, you get the benefit of having your configuration files
under version control, so if you accidentally make an edit to a file that
breaks something, it's very easy to roll back using standard git commands.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/manage-your-configs-vcsh" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 19 Nov 2013 21:26:49 +0000</pubDate>
    <dc:creator>Bill Childers</dc:creator>
    <guid isPermaLink="false">1197965 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>How to Deploy a Server</title>
  <link>https://www.linuxjournal.com/content/how-deploy-server</link>
  <description>  &lt;div data-history-node-id="1084418" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/kyle-rankin" lang="" about="https://www.linuxjournal.com/users/kyle-rankin" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Kyle Rankin&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
When I write my column, I try to stick to specific hacks or tips you can
use to make life with Linux a little easier. Usually, I describe with pretty
specific detail how to accomplish a particular task including command-line
and configuration file examples. This time, however, I take a
step off this tried-and-true path of tech tips and instead talk about
more-general, high-level concepts, strategies and, frankly, personal opinions
about systems administration.
&lt;/p&gt;

&lt;p&gt;
In this article, I discuss the current state of the art when it
comes to deploying servers. Through the years, the ways that sysadmins have
installed and configured servers have changed as they have looked for ways
to make their jobs easier. Each change has brought improvements based on
lessons learned from the past but also new flaws of its own. Here,
I identify a few different generations of server
deployment strategies and talk about what I feel are the best practices for 
sysadmins.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
The Beginning: by Hand&lt;/span&gt;

&lt;p&gt;
In the beginning, servers were configured completely by hand. When needing
a Web server, for instance, first a sysadmin would go through a
Linux OS install one question at a time. When it came to partitioning, the
sysadmin would labor over just how many partitions there should be and how
much space /, /home, /var, /usr and /boot truly would need for this
specific application. Once the OS was installed, the sysadmin either
would download and install Apache packages via the distribution's package manager
(if feeling lazy) or more likely would download the latest stable
version of the source code and run through the &lt;code&gt;./configure; make; make
install&lt;/code&gt; dance with custom compile-time options. Once all of the software
was installed, the sysadmin would pore over every configuration file and
tweak and tune each option to order. 
&lt;/p&gt;

&lt;p&gt;
Even the server's hostname was labored over with names chosen specifically
to suit this server's particular personality (although it probably was named
after some Greek or Roman god at some point in the sysadmin's
career—sysadmins seem to love that naming scheme). In the end, 
you would have
a very custom, highly optimized, tweaked and tuned server that was more
like a pet to the sysadmin who created it than a machine. This server was
truly a unique snowflake, and a year down the road, when you wanted a second
server just like it, you might be able to get close if the original
sysadmin was still there (and if he or she could remember everything done to
the server during the past year); otherwise, the poor sysadmin who came
next got to play detective. Worse, if that server ever died, you had
to hope there were good backups, or there was no telling how long it would take
to build a replacement.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/how-deploy-server" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 26 Mar 2013 17:37:30 +0000</pubDate>
    <dc:creator>Kyle Rankin</dc:creator>
    <guid isPermaLink="false">1084418 at https://www.linuxjournal.com</guid>
    </item>

  </channel>
</rss>
