<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" version="2.0" xml:base="https://www.linuxjournal.com/">
  <channel>
    <title>Servers</title>
    <link>https://www.linuxjournal.com/</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>Papa's Got a Brand New NAS: the Software</title>
  <link>https://www.linuxjournal.com/content/papas-got-brand-new-nas-software</link>
  <description>  &lt;div data-history-node-id="1340119" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/kyle-rankin" lang="" about="https://www.linuxjournal.com/users/kyle-rankin" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Kyle Rankin&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Who needs a custom NAS OS or a web-based GUI when command-line
NAS software is so easy to configure?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
In a recent letter to the editor, a reader told me he
enjoyed my &lt;a href="https://www.linuxjournal.com/content/papas-got-brand-new-nas"&gt;"Papa's
Got a Brand New NAS"&lt;/a&gt; article, but wished I had
spent more time describing the software I used. When I
wrote that article, I decided not to dive into the software too much,
because it was all pretty standard for serving files under Linux.
But on second thought, if you want to re-create what I made, I
imagine it would be nice to know the software side as well, so this article
describes the software I use in my home NAS.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
The OS&lt;/span&gt;

&lt;p&gt;
My NAS uses the &lt;a href="https://www.hardkernel.com/main/products/prdt_info.php"&gt;ODROID-XU4&lt;/a&gt; as the main computing platform, and so
far, I've found its octo-core ARM CPU and the rest of its resources
to be adequate for a home NAS. When I first set it up, I visited the
&lt;a href="https://wiki.odroid.com/odroid-xu4/odroid-xu4"&gt;official wiki
page&lt;/a&gt; for the computer, which provides a number of OS
images, including Ubuntu and Android images that you can copy onto a
microSD card. Those images are geared more toward desktop use,
however, and I wanted a minimal server image. After some searching,
I found a &lt;a href="https://forum.odroid.com/viewtopic.php?f=96&amp;t=17542"&gt;minimal image for what was the current Debian stable
release at the time (Jessie)&lt;/a&gt;.
&lt;/p&gt;


&lt;p&gt;
Although this minimal image worked okay for me, I don't necessarily
recommend just going with whatever OS some volunteer on a forum
creates. Since I first set up the computer, the Armbian project has
launched, and it provides a number of standardized OS images for quite
a few ARM platforms, including the ODROID-XU4. So if you
want to follow in my footsteps, you may want to start with the &lt;a href="https://www.armbian.com/odroid-xu4"&gt;minimal Armbian
Debian image&lt;/a&gt;.
&lt;/p&gt;

&lt;p&gt;
If you've ever used a Raspberry Pi before, the process of setting
up an alternative ARM board shouldn't be too different. Use another
computer to write an OS image to a microSD card, boot the ARM board,
and on first boot, the image will expand its filesystem to fill the
card. Then reboot and connect to the network, so you can log in with the default
credentials your particular image sets up. As with Raspbian builds,
the first step you should perform with Armbian or any other OS image
is to change the default password to something else. Even better,
you should consider setting up proper user accounts instead of
relying on the default.
&lt;/p&gt;&lt;/div&gt;
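&lt;p&gt;
The first-boot steps above can be sketched as commands. This is a hedged
illustration only: the image filename, the /dev/sdX device path and the
"kyle" user name are placeholders, not from the article. Verify the target
device with lsblk before writing anything, and run these against real
hardware, not blindly.
&lt;/p&gt;

```
# Flash a minimal Armbian image to a microSD card (paths are placeholders).
xz -dc Armbian_Debian_minimal_odroidxu4.img.xz | sudo dd of=/dev/sdX bs=4M status=progress
sync

# After the board boots and you log in with the image's default credentials:
passwd                       # change the default password immediately
sudo adduser kyle            # set up a proper user account instead
sudo usermod -aG sudo kyle   # give that account admin rights
```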
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/papas-got-brand-new-nas-software" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 29 Oct 2018 12:00:00 +0000</pubDate>
    <dc:creator>Kyle Rankin</dc:creator>
    <guid isPermaLink="false">1340119 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Stop Killing Your Cattle: Server Infrastructure Advice</title>
  <link>https://www.linuxjournal.com/content/stop-killing-your-cattle</link>
  <description>  &lt;div data-history-node-id="1340062" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/kyle-rankin" lang="" about="https://www.linuxjournal.com/users/kyle-rankin" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Kyle Rankin&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;It's great to treat your infrastructure like cattle—until it comes to
troubleshooting.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
If you've spent enough time at DevOps conferences, you've heard the phrase "pets
versus cattle" used to describe server infrastructure. The idea behind this
concept is that traditional infrastructure was built by hand without much
automation, and therefore, servers were treated more like special pets—you
would do anything you could to keep your pet alive, and you knew it by name because
you hand-crafted its configuration. As a result, it would take a lot of effort
to create a duplicate server if it ever went down. By contrast, modern DevOps
concepts encourage creating "cattle", which means that instead of unique,
hand-crafted servers, you use automation tools to build your servers so that no
individual server is special—they are all just farm animals—and
therefore, if a
particular server dies, it's no problem, because you can respawn an exact copy
with your automation tools in no time.
&lt;/p&gt;

&lt;p&gt;
If you want your infrastructure and your team to scale, there's a lot of
wisdom in treating servers more like cattle than pets. Unfortunately, there's
also a downside to this approach. Some administrators, particularly those
who are more
junior, have extended the concept of disposable servers to the point
that it has affected their troubleshooting process. Since servers are
disposable, and sysadmins can spawn a replacement so easily, at the first hint of
trouble with a particular server or service, these administrators destroy and
replace it in hopes that the replacement won't show the problem. Essentially,
this is the "reboot the Windows machine" approach IT teams used in the 1990s
(and Linux admins sneered at), only applied to the cloud.
&lt;/p&gt;

&lt;p&gt;
This approach isn't dangerous because it is ineffective. It's dangerous
exactly because it often works. If you have a problem with a machine and
reboot it, or if you have a problem with a cloud server and you destroy and
respawn it, often the problem does go away. Because the approach appears to
work and because it's a lot &lt;em&gt;easier&lt;/em&gt; than actually performing troubleshooting
steps, that success then reinforces rebooting and respawning as the first
resort, not the last resort that it should be.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/stop-killing-your-cattle" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 11 Sep 2018 12:00:00 +0000</pubDate>
    <dc:creator>Kyle Rankin</dc:creator>
    <guid isPermaLink="false">1340062 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Why You Should Do It Yourself</title>
  <link>https://www.linuxjournal.com/content/why-you-should-do-it-yourself</link>
  <description>  &lt;div data-history-node-id="1339869" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/kyle-rankin" lang="" about="https://www.linuxjournal.com/users/kyle-rankin" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Kyle Rankin&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Bring back the DIY movement and start with your own Linux servers.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
It wasn't very long ago that we lived in a society where it was a given
that average people would do things themselves. There was a built-in
assumption that you would perform basic repairs on household items, do general
maintenance and repairs on your car, mow your lawn, cook your food and
patch your clothes. The items around you reflected this assumption with
visible and easy-to-access screws, spare buttons sewn on the bottom of
shirts and user-replaceable parts.
&lt;/p&gt;

&lt;p&gt;
Over the years, though, our culture has shifted toward one more focused on
convenience. The microeconomic idea of "opportunity cost" (an idea that
you can assign value to each course of action and weigh it against
alternative actions you didn't take) has resulted in many people who
earn a reasonable wage concluding that they should do almost nothing
themselves.
&lt;/p&gt;

&lt;p&gt;
The typical thinking goes like this: if my hourly wage is
higher than the hourly cost of a landscaping service, even though that
landscaping service costs me money, it's still &lt;em&gt;cheaper&lt;/em&gt; than if I
mowed my own lawn, because I could somehow be earning my hourly wage
doing something else. This same calculation ends up justifying oil-change and landscaping services, microwave TV dinners and replacing
items when they break instead of repairing them yourself. The result
has been a switch to a service-oriented economy, with the advent of cheaper,
more disposable items that hide their screws and vehicles that are all
but hermetically sealed under the hood.
&lt;/p&gt;

&lt;p&gt;
This same convenience culture has found its way into technology, with
entrepreneurs in Silicon Valley racking their brains to think of
some new service they could invent to do some new task for you. Linux
and the Open Source movement are among the few places where you
can still find this do-it-yourself ethos in place.
&lt;/p&gt;

&lt;p&gt;
When referring to
proprietary software, Linux users used to say "You wouldn't buy a car with
the hood welded shut!" With Linux, you can poke under the hood and see
exactly how the system is running. The metaphorical screws are exposed,
and you can take the software apart and repair it yourself if you are so
inclined. Yet to be honest, so many people these days &lt;em&gt;would&lt;/em&gt; buy a car
with the hood welded shut. They also are fine with buying computers and
software that are metaphorically welded shut, all justified by convenience
and opportunity cost.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/why-you-should-do-it-yourself" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Thu, 31 May 2018 12:15:15 +0000</pubDate>
    <dc:creator>Kyle Rankin</dc:creator>
    <guid isPermaLink="false">1339869 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Rapid, Secure Patching: Tools and Methods</title>
  <link>https://www.linuxjournal.com/content/rapid-secure-patching-tools-and-methods</link>
  <description>  &lt;div data-history-node-id="1339631" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/charles-fisher" lang="" about="https://www.linuxjournal.com/users/charles-fisher" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Charles Fisher&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
&lt;em&gt;
Generate enterprise-grade SSH keys and load them into an agent for control
of all kinds of Linux hosts. Script the agent with the Parallel Distributed
Shell (pdsh) to effect rapid changes over your server farm.
&lt;/em&gt;
&lt;/p&gt;

&lt;p&gt;
It was with some measure of disbelief that the computer science community
greeted the recent &lt;a href="https://en.wikipedia.org/wiki/EternalBlue"&gt;EternalBlue&lt;/a&gt;-related exploits that have torn through
massive numbers of vulnerable systems.
The SMB exploits have kept coming
(the most recent being &lt;a href="http://securityaffairs.co/wordpress/61530/hacking/smbloris-smbv1-flaw.html"&gt;SMBLoris&lt;/a&gt; presented at the last DEF CON, which impacts
multiple SMB protocol versions, and for which Microsoft will issue no
corrective patch).
Attacks with these tools &lt;a href="http://www.telegraph.co.uk/news/2017/05/13/nhs-cyber-attack-everything-need-know-biggest-ransomware-offensive"&gt;incapacitated critical
infrastructure&lt;/a&gt; to the point that patients were even turned away from the British
National Health Service.
&lt;/p&gt;

&lt;p&gt;
It is with considerable sadness that, during this SMB catastrophe, we
also have come to understand that the famous Samba server presented an
exploitable attack surface on the public internet in sufficient numbers for
a worm to propagate successfully. I previously &lt;a href="http://www.linuxjournal.com/content/smbclient-security-windows-printing-and-file-transfer"&gt;have
discussed SMB security&lt;/a&gt;
in &lt;em&gt;Linux Journal&lt;/em&gt;, and I am no longer of the opinion that SMB server processes should run on
Linux.
&lt;/p&gt;

&lt;p&gt;
In any case, systems administrators of all architectures must be able to
down vulnerable network servers and patch them quickly. There is often a
need for speed and competence when working with a large collection of Linux
servers. Whether this is due to security situations or other concerns is
immaterial—the hour of greatest need is not the time to begin to build
administration tools. Note that in the event of an active intrusion by
hostile parties, &lt;a href="https://staff.washington.edu/dittrich/misc/forensics"&gt;forensic
analysis&lt;/a&gt; may be a legal requirement, and no steps
should be taken on the compromised server without a careful plan and
documentation.
Especially in this new era of the black hats, computer
professionals must step up their game and be able to secure vulnerable
systems quickly.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Secure SSH Keypairs&lt;/span&gt;

&lt;p&gt;
Tight control of a heterogeneous UNIX environment must begin with
best-practice use of SSH authentication keys. I'm going to open this section with
a simple requirement. SSH private keys must be one of three types: Ed25519,
ECDSA using the E-521 curve or RSA keys of 3072 bits. Any key that does not
meet those requirements should be retired (in particular, DSA keys must be
removed from service immediately).
&lt;/p&gt;&lt;/div&gt;
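&lt;p&gt;
As a sketch, one key of each acceptable type can be generated with
ssh-keygen as shown below. One caveat: stock OpenSSH does not implement
the E-521 curve itself; its closest available option is ECDSA over NIST
P-521 (-t ecdsa -b 521). The file paths and comment string here are
illustrative, not prescribed by the article.
&lt;/p&gt;

```shell
# Generate one key of each acceptable type (paths and comments are examples).
mkdir -p /tmp/keys
ssh-keygen -t ed25519 -f /tmp/keys/id_ed25519 -N '' -C 'example'     # Ed25519
ssh-keygen -t ecdsa -b 521 -f /tmp/keys/id_ecdsa -N '' -C 'example'  # ECDSA, NIST P-521
ssh-keygen -t rsa -b 3072 -f /tmp/keys/id_rsa -N '' -C 'example'     # RSA, 3072 bits
ls -l /tmp/keys
```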
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/rapid-secure-patching-tools-and-methods" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 29 Jan 2018 16:45:47 +0000</pubDate>
    <dc:creator>Charles Fisher</dc:creator>
    <guid isPermaLink="false">1339631 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Avoiding Server Disaster</title>
  <link>https://www.linuxjournal.com/content/avoiding-server-disaster</link>
  <description>  &lt;div data-history-node-id="1339604" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/reuven-m-lerner" lang="" about="https://www.linuxjournal.com/users/reuven-m-lerner" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Reuven M. Lerner&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Worried that your server will go down? You should be. Here are some
disaster-planning tips for server owners.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
If you own a car or a house, you almost certainly have
insurance. Insurance seems like a huge waste of money. You pay it
every year and make sure that you get the best possible price for the
best possible coverage, and then you hope you never need to use the
insurance. Insurance seems like a really bad deal—until you have a
disaster and realize that had it not been for the insurance, you
might have been in financial ruin.
&lt;/p&gt;


&lt;p&gt;
Unfortunately, disasters and mishaps are a fact of life in the
computer industry. And so, just as you pay insurance and hope never to
have to use it, you also need to take time to ensure the safety and
reliability of your systems—not because you want disasters to happen,
or even expect them to occur, but rather because you have to.
&lt;/p&gt;

&lt;p&gt;
If your website is an online brochure for your company, and it goes
down for a few hours or even days, it'll be embarrassing and
annoying, but not financially painful. But, if your website is your
business, when your site goes down, you're losing money. If
that's the case, it's crucial to ensure that your server and
software are not only unlikely to go down, but also easily recoverable if
and when that happens.
&lt;/p&gt;

&lt;p&gt;
Why am I writing about this subject? Well, let's just say that this
particular problem hit close to home for me, just before I started to
write this article. After years of helping clients around the world to
ensure the reliability of their systems, I made the mistake of not
being as thorough with my own. ("The shoemaker's children go
barefoot", as the saying goes.) This means that just after launching
my new online product for Python developers, a seemingly trivial
upgrade turned into a disaster. The precautions I put in place,
it turns out, weren't quite enough—and as I write this, I'm still
putting my web server together. I'll survive, as will my server and
business, but this has been a painful and important lesson—one that
I'll do almost anything to avoid repeating in the future.
&lt;/p&gt;

&lt;p&gt;
So in this article, I describe a number of techniques I've used to keep
servers safe and sound through the years, and to reduce the chances of a
complete meltdown. You can think of these techniques as insurance for
your server, so that even if something does go wrong, you'll be
able to recover fairly quickly.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/avoiding-server-disaster" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 17 Jan 2018 14:46:51 +0000</pubDate>
    <dc:creator>Reuven M. Lerner</dc:creator>
    <guid isPermaLink="false">1339604 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Thinking Concurrently: How Modern Network Applications Handle Multiple Connections</title>
  <link>https://www.linuxjournal.com/content/thinking-concurrently</link>
  <description>  &lt;div data-history-node-id="1339585" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/reuven-m-lerner" lang="" about="https://www.linuxjournal.com/users/reuven-m-lerner" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Reuven M. Lerner&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
&lt;cite&gt;Reuven explores different types of multiprocessing and looks at the advantages and disadvantages of each.&lt;/cite&gt;
&lt;/p&gt;

&lt;p&gt;
When I first started consulting, and my clients were small
organizations just getting started on the web, they inevitably
would ask me what kind of high-powered server they would need. My clients
were all convinced that they were going to be incredibly popular and
important, and that they would have lots of visitors coming to their
websites—and it was important that their sites would be able to stand up
under this load.
&lt;/p&gt;

&lt;p&gt;
I would remind them that each day has 86,400 seconds. This means that
if one new person visits their site each second, the server will need
to handle 86,400 requests per day—a trivial number, for most modern
computers, especially if you're just serving up static files.
&lt;/p&gt;

&lt;p&gt;
I would then ask whether they really expected to get more than 86,000
visitors per day. The client would almost inevitably answer, somewhat
sheepishly, "No, definitely not."
&lt;/p&gt;

&lt;p&gt;
Now, I knew that my clients didn't need to worry about the size or
speed of their servers; I really did have their best interests at
heart, and I was trying to convince them, in a somewhat dramatic way,
that they didn't need to spend money on a new server. But I did take
certain liberties with the truth when I presented those numbers—for
example:
&lt;/p&gt;

&lt;ul&gt;&lt;li&gt;
&lt;p&gt;
There's a difference between 86,400 visitors in one day, spread out
evenly across the entire day, and a spike during lunch hour, when
many people do their shopping and leisure reading.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Web pages that contain CSS, JavaScript and images—which is all
of them, in the modern era—require more than one HTTP request for
each page load. Even if you have 10,000 visitors, you might well
have more than 100,000 HTTP requests to your server.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
When a simple website becomes a web application, you need to start
worrying about the speed of back-end databases and third-party
services, as well as the time it takes to compute certain things.
&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;
So, what do you do in such cases? If you can handle precisely one
request per second, what happens if more than one person visits
your site at the same time? You could make one of them wait until the
other is finished and then service the next one, but if you have 10
or 15 simultaneous requests, that tactic eventually will backfire on
you.
&lt;/p&gt;

&lt;p&gt;
In most modern systems, the solution has been to take advantage of
multiprocessing: have the computer do more than one thing at a time.
If a computer can do two things each second, and if your visitors are
spread out precisely over the course of a day, then you can handle
172,800 visitors. And if you can do three things at a time, you
suddenly can handle 259,200 visitors—and so forth.
&lt;/p&gt;&lt;/div&gt;
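&lt;p&gt;
The arithmetic above can be illustrated with a toy shell sketch (not from
the article): three one-second "requests" handled one at a time take about
three seconds, while handled three at a time they finish in about one.
&lt;/p&gt;

```shell
# Each simulated request just sleeps for 1 second.
start=$(date +%s)
printf '1\n2\n3\n' | xargs -P 1 -I{} sleep 1    # one at a time: roughly 3s
sequential=$(( $(date +%s) - start ))

start=$(date +%s)
printf '1\n2\n3\n' | xargs -P 3 -I{} sleep 1    # three concurrently: roughly 1s
concurrent=$(( $(date +%s) - start ))

echo "sequential=${sequential}s concurrent=${concurrent}s"
```

(The -P flag to xargs, which sets the number of parallel processes, is a
common GNU/BSD extension rather than strict POSIX.)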
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/thinking-concurrently" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 12 Jan 2018 13:16:57 +0000</pubDate>
    <dc:creator>Reuven M. Lerner</dc:creator>
    <guid isPermaLink="false">1339585 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Ansible: the Automation Framework That Thinks Like a Sysadmin</title>
  <link>https://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin</link>
  <description>  &lt;div data-history-node-id="1339558" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/shawn-powers" lang="" about="https://www.linuxjournal.com/users/shawn-powers" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Shawn Powers&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
I've written about and trained folks on various DevOps tools through the years, and
although they're awesome, it's obvious that most of them are designed from the
mind of a developer. There's nothing wrong with that, because approaching
configuration management programmatically is the whole point. Still,
it wasn't until I started playing with Ansible that I felt like it was
something a sysadmin quickly would appreciate.
&lt;/p&gt;


&lt;p&gt;
Part of that appreciation comes from the way Ansible communicates with its
client computers—namely, via SSH. As sysadmins, you're all very familiar
with connecting to computers via SSH, so right from the word
"go", you
have a better understanding of Ansible than the other alternatives.
&lt;/p&gt;

&lt;p&gt;
With that in mind, I'm planning
to write a few articles exploring how to take advantage of
Ansible. It's a great system, but when I was first exposed to it, it wasn't
clear how to start. It's not that the learning curve is steep. In fact,
if anything, the problem was that I didn't really have that much to learn
before starting to use Ansible, and that made it confusing. For example,
if you don't have to install an agent program (Ansible doesn't have any
software installed on the client computers), how do you start?
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Getting to the Starting Line&lt;/span&gt;

&lt;p&gt;
The reason Ansible was so difficult for me at first is that it's so
flexible about how to configure the server/client relationship that I
didn't know what I was supposed to do. The truth is that Ansible doesn't
really care how you set up the SSH system; it will utilize whatever
configuration you have. There are just a couple things to consider:
&lt;/p&gt;

&lt;ol&gt;&lt;li&gt;
&lt;p&gt;
Ansible needs to connect to the client computer via SSH.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Once connected, Ansible needs to elevate privilege so it can configure
the system, install packages and so on.
&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;
Unfortunately, those two considerations really open a can of
worms. Connecting to a remote computer and elevating privilege is a
scary thing to allow. For some reason, it feels less vulnerable when you
simply install an agent on the remote computer and let Chef or Puppet
handle privilege escalation. It's not that Ansible is any less secure,
but rather, it puts the security decisions in your hands.
&lt;/p&gt;
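&lt;p&gt;
As one hedged example of satisfying both considerations (the host name,
user name and key path below are hypothetical), an Ansible inventory entry
can carry the SSH connection details alongside the privilege-escalation
settings:
&lt;/p&gt;

```shell
# Write a minimal inventory covering both points: how Ansible connects
# over SSH, and how it elevates privilege once connected.
printf '%s\n' \
  '[webservers]' \
  'web1.example.com ansible_user=deploy ansible_ssh_private_key_file=~/.ssh/id_ed25519' \
  '' \
  '[webservers:vars]' \
  'ansible_become=true' \
  'ansible_become_method=sudo' > inventory.ini

# Connectivity check, once Ansible is installed and the host is reachable:
# ansible -i inventory.ini webservers -m ping
```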

&lt;p&gt;
Next I'm going to
list a bunch of potential configurations, along with the pros and cons
of each. This isn't an exhaustive list, but it should get you thinking
along the right lines for what will be ideal in your environment. I
also should note that I'm not going to mention systems like Vagrant,
because although Vagrant is wonderful for building a quick infrastructure
for testing and developing, it's so very different from a bunch of
servers that the considerations really are too dissimilar to compare.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Some SSH Scenarios&lt;/span&gt;

&lt;p&gt;
&lt;em&gt;1) SSHing into the remote computer as root with a password in the Ansible
config.&lt;/em&gt;
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 05 Jan 2018 13:02:34 +0000</pubDate>
    <dc:creator>Shawn Powers</dc:creator>
    <guid isPermaLink="false">1339558 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Banana Backups</title>
  <link>https://www.linuxjournal.com/content/banana-backups</link>
  <description>  &lt;div data-history-node-id="1339554" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/kyle-rankin" lang="" about="https://www.linuxjournal.com/users/kyle-rankin" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Kyle Rankin&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
In the September 2016 issue, I wrote an article called &lt;a href="http://www.linuxjournal.com/content/papas-got-brand-new-nas"&gt;"Papa's Got a Brand New
NAS"&lt;/a&gt;
where I described how I replaced my rackmounted gear with a small,
low-powered ARM device—the ODROID-XU4. Before I settled on that
solution,
I tried out a few others including a pair of Banana Pi computers—small
single-board computers like Raspberry Pis, only with gigabit networking
and SATA2 controllers on board. In the end, I decided to go with a
single higher-powered board and use a USB3 disk enclosure with RAID
instead of building a cluster of Banana Pis that each had a single disk
attached. Since I had two Banana Pis left over after this experiment,
I decided to put them to use, so in this article, I describe how I
turned one into a nice little backup server.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
The Hardware&lt;/span&gt;

&lt;p&gt;
Although Raspberry Pis are incredibly popular and useful if you want a small,
low-powered, cheap computer, they have their downsides as network
backup servers. One of the main downsides is low-performance disk and
network speeds. A Raspberry Pi maxes out at 100Mbit on the network and
offers only USB2 ports if you want to add a hard drive. Those limitations
are what drove me to look for other solutions for my home NAS in the first
place, and it's one area where a Banana Pi has an edge. Even though the
modern Raspberry Pi 3 has a faster CPU, the old Banana Pi still beats
it on network and disk I/O. This makes it pretty ideal as a standalone
system for home network backups, depending on your needs.
&lt;/p&gt;

&lt;p&gt;
In my case, I'm not backing up terabytes of media; I just wanted bare-metal
backups of my servers and workstations along with backups of important
documents. The size of your backups is important, because the Banana Pi
is limited to a single SATA2 port, and the board itself can power
only a 2.5" laptop drive. So if you want to stick with local power, you are limited
to 2.5" hard drive sizes. That said, if you were willing to splurge on an
externally powered SATA2 enclosure, you could use a 3.5" drive instead. In
my case, I happened to have an old 2.5" 500GB laptop drive lying around
that I had since replaced with an SSD. Note that you probably will need to
order the appropriate SATA2 cable to connect your hard drive with your
Banana Pi—it doesn't typically come with the board.
&lt;/p&gt;

&lt;p&gt;
Although I imagine you could just have the board and a laptop drive sitting
on a shelf, I wanted to protect it a bit more than that. Since I have a
3D printer, naturally I went to Thingiverse to see if it had any cases
for a Banana Pi. It turns out someone made just the thing I needed—a
&lt;a href="https://www.thingiverse.com/thing:1323881"&gt;Banana Pi case&lt;/a&gt; that also had mounting points for a 2.5" hard drive. I
printed out the case (in yellow, naturally) and was able to mount the
board and the laptop drive without any issues.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/banana-backups" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 21 Nov 2017 15:58:49 +0000</pubDate>
    <dc:creator>Kyle Rankin</dc:creator>
    <guid isPermaLink="false">1339554 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Sysadmin 101: Patch Management</title>
  <link>https://www.linuxjournal.com/content/sysadmin-101-patch-management</link>
  <description>  &lt;div data-history-node-id="1339545" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/kyle-rankin" lang="" about="https://www.linuxjournal.com/users/kyle-rankin" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Kyle Rankin&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
A few articles ago, I started a Sysadmin 101 series to pass down some fundamental
knowledge about systems administration that the current generation of junior
sysadmins, DevOps engineers or "full stack" developers might not
learn otherwise.
I had thought that I was done with the series, but then the WannaCry
malware came out and exposed some of the poor patch management practices still
in place in Windows networks. I imagine some readers who are still stuck in
the Linux versus Windows wars of the 2000s might even have smiled with a sense
of superiority when they heard about this outbreak.
&lt;/p&gt;

&lt;p&gt;
The reason I decided to
revive my Sysadmin 101 series so soon is that I realized most Linux
system administrators are no different from Windows sysadmins when it comes
to patch management. Honestly, in some areas (in particular, pride in long
uptimes), some Linux sysadmins are even worse than their Windows counterparts.
So in this
article, I cover some of the fundamentals of patch management under Linux,
including what a good patch management system looks like, the tools you will
want to put in place and how the overall patching process should work.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
What Is Patch Management?&lt;/span&gt;

&lt;p&gt;
When I say patch management, I'm referring to the systems you have in place to
update software already on a server. I'm not just talking about keeping up with
the latest-and-greatest bleeding-edge version of a piece of
software. Even more
conservative distributions like Debian that stick with a particular version of
software for their "stable" releases still publish frequent updates that patch
bugs or security holes.
&lt;/p&gt;
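&lt;p&gt;
On a Debian-based system, for example, those stable-release updates flow
through apt, and a minimal check-and-apply cycle looks like the following.
This is only a sketch of the basic commands; a real patch management
process, as discussed in this article, layers testing and staged rollouts
on top of it:
&lt;/p&gt;

&lt;pre&gt;
# Refresh the package lists so apt knows about newly published updates
sudo apt-get update

# Review which installed packages have pending updates
apt list --upgradable

# Apply all pending updates
sudo apt-get upgrade
&lt;/pre&gt;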

&lt;p&gt;
Of course, if your organization decided to roll its own version of a
particular piece of software, whether because developers demanded the latest
and greatest, because you needed to fork the software to apply a custom
change, or because you just like giving yourself extra work, you now have a
problem. Ideally, you have put in place a system that automatically packages
up the custom version of the software in the same continuous integration
system you use to build and package any other software, but many sysadmins
still rely on the outdated method of packaging the software on their local
machines based on (hopefully up-to-date) documentation on their wiki. In
either case, you will need to confirm that your
particular version has the security flaw, and if so, make sure that the new patch
applies cleanly to your custom version.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/sysadmin-101-patch-management" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 14 Nov 2017 12:23:19 +0000</pubDate>
    <dc:creator>Kyle Rankin</dc:creator>
    <guid isPermaLink="false">1339545 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>PSSC Labs' PowerServe HPC Servers and PowerWulf HPC Clusters</title>
  <link>https://www.linuxjournal.com/content/pssc-labs-powerserve-hpc-servers-and-powerwulf-hpc-clusters</link>
  <description>  &lt;div data-history-node-id="1339524" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
In its quest to provide customers with the latest and best computing solutions that
deliver relentless performance with the absolute lowest TCO, &lt;a href="http://www.pssclabs.com"&gt;PSSC Labs&lt;/a&gt; has
supercharged two server solutions with next-generation processing power. 
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12237f4.png" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;p&gt;
The breakthrough
technology of Intel's new Xeon Scalable Processors has been integrated into
PSSC Labs' PowerServe HPC line of servers and the PowerWulf line of HPC
clusters, a move that guarantees performance capable of handling cutting-edge
computing tasks, such as real-time analytics, virtualized infrastructure and
high-performance computing. 
&lt;/p&gt;

&lt;p&gt;
Besides the advanced architecture, the new processors
offer a diverse suite of platform innovations for enhanced application performance,
including Intel AVX-512, Intel Mesh Architecture, Intel QuickAssist, Intel Optane
SSDs and Intel Omni-Path Fabric. 
&lt;/p&gt;

&lt;p&gt;
Both PSSC Labs offerings are designed as reliable,
flexible HPC solutions targeted at government, academic and commercial
environments. Some examples of sectors that will benefit from the new performance
include design and engineering, life and physical sciences, financial services and
machine/deep learning.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/pssc-labs-powerserve-hpc-servers-and-powerwulf-hpc-clusters" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 16 Oct 2017 14:43:17 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339524 at https://www.linuxjournal.com</guid>
    </item>

  </channel>
</rss>
