<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" version="2.0" xml:base="https://www.linuxjournal.com/">
  <channel>
    <title>Docker</title>
    <link>https://www.linuxjournal.com/</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>FileRun on Linode</title>
  <link>https://www.linuxjournal.com/content/filerun-docker</link>
  <description>  &lt;div data-history-node-id="1340894" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img loading="lazy" src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/filerun-on-docker.jpg" width="850" height="500" alt="FileRun on Linode" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/david-burgess" lang="" about="https://www.linuxjournal.com/users/david-burgess" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;David Burgess&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;You may want to set up a file server like FileRun for any number of reasons. The main reason, I would think, would be so you can have your own Google Drive alternative that is under your control instead of Google's.&lt;/p&gt;
&lt;p&gt;FileRun claims to be "Probably the best File Manager in the world with desktop Sync and File Sharing," but I think you'll have to be the judge of that for yourself.&lt;/p&gt;
&lt;p&gt;Just to be completely transparent here: I like FileRun, but it has one shortcoming that I hope they will eventually fix. Some settings that are, in my opinion, very important are locked away behind an Enterprise License requirement.&lt;/p&gt;
&lt;p&gt;That aside, I really like the ease-of-use and flexibility of FileRun. So let's take a look at it.&lt;/p&gt;
&lt;h2 id="prerequisites-for-filerun-in-docker-"&gt;Prerequisites for FileRun in Docker&lt;/h2&gt;
&lt;p&gt;First things first, you’ll need a Docker server set up. Linode has made that process very simple: you can set one up for just a few bucks a month, add a private IP address for free, and add backups for just a couple bucks more per month.&lt;/p&gt;
&lt;p&gt;Another thing you’ll need is a domain name, which you can buy from almost anywhere online for a wide range of prices depending on where you make your purchase. Be sure to point the domain's DNS settings to Linode. You can find more information about that here: &lt;a href="https://www.linode.com/docs/guides/dns-manager/"&gt;https://www.linode.com/docs/guides/dns-manager/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You’ll also want a reverse proxy set up on your Docker Server so that you can do things like route traffic and manage SSLs on your server. I made a video about the process of setting up a Docker server with Portainer and a reverse proxy called Nginx Proxy Manager that you can check out here: &lt;a href="https://www.youtube.com/watch?v=7oUjfsaR0NU"&gt;https://www.youtube.com/watch?v=7oUjfsaR0NU&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Once you’ve got your Docker server set up, you can begin the process of setting up your FileRun file server on it.&lt;/p&gt;
&lt;p&gt;There are two primary ways you can do this:&lt;/p&gt;
&lt;ol&gt;&lt;li&gt;In the command line via SSH.&lt;/li&gt;
&lt;li&gt;In Portainer via the Portainer dashboard.&lt;/li&gt;
&lt;/ol&gt;&lt;p&gt;We're going to take a look at how to do this in Portainer so that we can have a user interface to work with.&lt;/p&gt;
&lt;p&gt;Head over to &lt;a href="http://your-server-ip-address:9000/"&gt;http://your-server-ip-address:9000&lt;/a&gt; and get logged into Portainer with the credentials we set up in our previous post/video.&lt;/p&gt;
&lt;p&gt;On the left side of the screen, we're going to click the "Stacks" link and then, on the next page, click the "+ Add stack" button.&lt;/p&gt;
&lt;p&gt;This will bring up a page where you'll enter the name of the stack. Below that, you can copy and paste the following:&lt;/p&gt;&lt;/div&gt;
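&lt;p&gt;The stack definition itself is in the full article. As a rough, hypothetical sketch of what a FileRun deployment involves (the image names, ports, paths and credentials below are assumptions for illustration, not the article's actual stack), the equivalent from the command line is a database container plus the FileRun container:&lt;/p&gt;

```shell
# Hypothetical FileRun deployment sketch; credentials, ports and host
# paths here are placeholders, not the stack from the full article.
docker network create filerun-net

# FileRun stores users and metadata in a MariaDB database.
docker run -d --name filerun-db --network filerun-net \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -e MYSQL_DATABASE=filerun \
  -e MYSQL_USER=filerun \
  -e MYSQL_PASSWORD=changeme \
  -v /opt/filerun/db:/var/lib/mysql \
  mariadb:10.6

# The FileRun web application, pointed at the database container.
docker run -d --name filerun --network filerun-net \
  -e FR_DB_HOST=filerun-db \
  -e FR_DB_NAME=filerun \
  -e FR_DB_USER=filerun \
  -e FR_DB_PASS=changeme \
  -v /opt/filerun/html:/var/www/html \
  -v /opt/filerun/user-files:/user-files \
  -p 8080:80 \
  filerun/filerun
```

&lt;p&gt;In Portainer, the same two services would be expressed in Compose syntax in the stack editor rather than run by hand.&lt;/p&gt;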
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/filerun-docker" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 20 Sep 2022 16:00:00 +0000</pubDate>
    <dc:creator>David Burgess</dc:creator>
    <guid isPermaLink="false">1340894 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Getting Started with Docker Semi-Self-Hosting on Linode</title>
  <link>https://www.linuxjournal.com/content/getting-started-docker-semi-self-hosting-linode</link>
  <description>  &lt;div data-history-node-id="1340875" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img loading="lazy" src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/getting-started-with-docker-semi-self-hosting-on-linode.jpg" width="850" height="500" alt="Getting Started with Docker Semi-Self-Hosting on Linode" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/david-burgess" lang="" about="https://www.linuxjournal.com/users/david-burgess" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;David Burgess&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;With the evolution of technology, we find ourselves needing to be even more vigilant with our online security every day. Our browsing and shopping behaviors are also being continuously tracked online via tracking cookies being dropped on our browsers that we allow by clicking the “I Accept” button next to deliberately long agreements on websites before we can get the full benefit of said site.&lt;/p&gt;
&lt;p&gt;Additionally, hackers are always looking for a target and it's common for even big companies to have their servers compromised in any number of ways and have sensitive data leaked, often to the highest bidder.&lt;/p&gt;
&lt;p&gt;These are just some of the reasons that I started looking into self-hosting as much of my own data as I could.&lt;/p&gt;
&lt;p&gt;Because not everyone has the option to self-host on their own private hardware, whether for lack of hardware or because their ISP makes it difficult or impossible to do so, I want to show you what I believe to be the next best step: a semi-self-hosted solution on Linode.&lt;/p&gt;
&lt;p&gt;Let's jump right in!&lt;/p&gt;
&lt;h2&gt;Setting up a Linode&lt;/h2&gt;
&lt;p&gt;First things first, you’ll need a Docker server set up. Linode has made that process very simple: you can set one up for just a few bucks a month, add a private IP address for free, and add backups for just a couple bucks more per month.&lt;/p&gt;
&lt;p&gt;Get logged into your Linode account and click on "Create Linode".&lt;/p&gt;
&lt;p&gt;Don't have a Linode account?  &lt;a href="https://www.linode.com/lp/brand-free-credit/?utm_source=linux_journal&amp;utm_medium=affiliate&amp;utm_campaign=&amp;utm_content=&amp;utm_term="&gt;Get $100 in credit by clicking here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;On the "Create" page, click on the "Marketplace" tab and scroll down to the "Docker" option. Click it.&lt;/p&gt;
&lt;p&gt;With Docker selected, scroll down and close the "Advanced Options" as we won't be using them.&lt;/p&gt;
&lt;p&gt;Below that, we'll select the most recent version of Debian (version 10 at the time of writing).&lt;/p&gt;
&lt;p&gt;In order to get the lowest latency for your setup, select a Region nearest you.&lt;/p&gt;
&lt;p&gt;When we get to the "Linode Plan" area, find an option that fits your budget. You can always start with a small plan and upgrade later as your needs grow.&lt;/p&gt;
&lt;p&gt;Next, enter a "Linode Label" as an identifier for you. You can enter tags if you want.&lt;/p&gt;
&lt;p&gt;Enter a Root Password and import an SSH key if you have one. If you don't, that's fine; you don't need to use an SSH key. If you'd like to generate one and use it, you can find more information in &lt;a href="https://www.linode.com/docs/guides/use-public-key-authentication-with-ssh/"&gt;"Creating an SSH Key Pair and Configuring Public Key Authentication on a Server"&lt;/a&gt;.&lt;/p&gt;&lt;/div&gt;
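&lt;p&gt;If you do want a key, generating one is a single command on your local machine. A minimal sketch (the file name and comment are just examples):&lt;/p&gt;

```shell
# Create ~/.ssh if it does not exist, then generate an ed25519 key pair.
# -N "" skips the passphrase for brevity; use a real passphrase in practice.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -C "you@example.com" -f "$HOME/.ssh/id_ed25519_linode" -N "" -q

# Copy the public key to your Linode (substitute your server's IP):
# ssh-copy-id -i "$HOME/.ssh/id_ed25519_linode.pub" root@your-linode-ip
```

&lt;p&gt;You can then paste the contents of the .pub file into the "SSH Keys" field on the Create page.&lt;/p&gt;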
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/getting-started-docker-semi-self-hosting-linode" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 29 Mar 2022 16:00:00 +0000</pubDate>
    <dc:creator>David Burgess</dc:creator>
    <guid isPermaLink="false">1340875 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Build a Versatile OpenStack Lab with Kolla</title>
  <link>https://www.linuxjournal.com/content/build-versatile-openstack-lab-kolla</link>
  <description>  &lt;div data-history-node-id="1340736" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/john-s-tonello" lang="" about="https://www.linuxjournal.com/users/john-s-tonello" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;John S. Tonello&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Hone your OpenStack skills with a full deployment in a single virtual machine.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
It's hard to go anywhere these days without hearing something about the urgent
need to deploy on-premises cloud environments that are agile, flexible and don't
cost an arm and a leg to build and maintain, but getting your hands on a real
OpenStack cluster—the de facto standard—can be downright impossible.
&lt;/p&gt;

&lt;p&gt;
Enter Kolla-Ansible, an official OpenStack project that allows you to
deploy a complete cluster successfully—including Keystone, Cinder, Neutron,
Nova, Heat and Horizon—in Docker containers on a single, beefy virtual
machine. It's actually just one of an emerging group of official OpenStack
projects that containerize the OpenStack control plane so users can deploy
complete systems in containers and Kubernetes.
&lt;/p&gt;

&lt;p&gt;
To date, for those who don't happen to have a bunch of extra servers loaded
with RAM and CPU cores handy, DevStack has served as the go-to OpenStack lab
environment, but it comes with some limitations. Key among those is your
inability to reboot a DevStack system effectively. In fact, rebooting generally
bricks your instances and renders the rest of the stack largely unusable.
DevStack also limits your ability to experiment beyond core OpenStack modules,
whereas Kolla lets you build systems that can mimic full production capabilities,
make changes and pick up where you left off after a shutdown.
&lt;/p&gt;

&lt;p&gt;
In this article, I explain how to deploy Kolla, starting from the initial
configuration of your laptop or workstation, to configuration of your cluster,
to putting your OpenStack cluster into service.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Why OpenStack?&lt;/span&gt;

&lt;p&gt;
As organizations of all shapes and sizes look to speed development and
deployment of mission-critical applications, many turn to public clouds like
Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine, RackSpace
and many others. All make it easy to build the systems you and your
organization need quickly. Still, these public cloud services come at a
price—sometimes a steep price you only learn about at the end of a billing cycle.
Anyone in your organization with a credit card can spin up servers, even ones
containing proprietary data and inadequate security safeguards.
&lt;/p&gt;

&lt;p&gt;
OpenStack, a community-driven open-source project with thousands of developers
worldwide, offers a robust, enterprise-worthy alternative. It gives you the
flexibility of public clouds in your own data center. In many ways, it's also
easier to use than public clouds, particularly when OpenStack administrators
properly set up networks, carve out storage and compute resources, and provide
self-service capabilities to users. It also has tons of add-on capabilities to
suit almost any use case you can imagine. No wonder 75% of private
clouds are built using OpenStack.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/build-versatile-openstack-lab-kolla" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 07 Aug 2019 17:30:00 +0000</pubDate>
    <dc:creator>John S. Tonello</dc:creator>
    <guid isPermaLink="false">1340736 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Sharing Docker Containers across DevOps Environments</title>
  <link>https://www.linuxjournal.com/content/sharing-docker-containers-across-devops-environments</link>
  <description>  &lt;div data-history-node-id="1340036" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/todd-jacobs" lang="" about="https://www.linuxjournal.com/users/todd-jacobs" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Todd A. Jacobs&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Docker provides a powerful tool for creating lightweight images and
containerized processes, but did you know it can make your development
environment part of the DevOps pipeline too? Whether you're managing
tens of thousands of servers in the cloud or are a software engineer looking
to incorporate Docker containers into the software development life
cycle, this article has a little something for everyone with a passion
for Linux and Docker.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
In this article, I describe how Docker containers flow
through the DevOps pipeline. I also cover some advanced DevOps
concepts (borrowed from object-oriented programming) on how to use
dependency injection and encapsulation to improve the DevOps process.
And finally, I show how containerization can be useful for the
development and testing process itself, rather than just as a
place to serve up an application after it's written.
&lt;/p&gt;


&lt;span class="h3-replacement"&gt;
Introduction&lt;/span&gt;

&lt;p&gt;
Containers are hot in DevOps shops, and their benefits from an
operations and service delivery point of view have been covered well
elsewhere. If you want to build a Docker container or deploy a Docker
host, container or swarm, a lot of information is available.
However, very few articles talk about how to &lt;em&gt;develop&lt;/em&gt; inside the Docker
containers that will be reused later in the DevOps pipeline, so that's what
I focus on here.
&lt;/p&gt;

&lt;img src="https://www.linuxjournal.com/sites/default/files/styles/max_650x650/public/u%5Buid%5D/12282f1%281%29.png" width="650" height="130" alt="""" class="image-max_650x650" /&gt;&lt;p&gt;&lt;em&gt;Figure 1.
Stages a Docker Container Moves Through in a Typical DevOps
Pipeline&lt;/em&gt;&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Container-Based Development Workflows&lt;/span&gt;

&lt;p&gt;
Two common workflows exist for developing software for use inside Docker
containers:
&lt;/p&gt;

&lt;ol&gt;&lt;li&gt;
Injecting development tools into an existing Docker container:
this is the best option for sharing a consistent development environment
with the same toolchain among multiple developers, and it can be used in
conjunction with web-based development environments, such as Red Hat's
codenvy.com or dockerized IDEs like Eclipse Che.
&lt;/li&gt;

&lt;li&gt;
Bind-mounting a host directory onto the Docker container and using your
existing development tools on the host:
this is the simplest option, and it offers flexibility for developers
to work with their own set of locally installed development tools.
&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;
Both workflows have advantages, but local mounting is inherently simpler. For
that reason, I focus on the mounting solution as "the simplest
thing that could possibly work" here.
&lt;/p&gt;
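&lt;p&gt;The bind-mount workflow can be sketched in one command. Here the image, working directory and entry point are illustrative assumptions, not prescriptions from the article:&lt;/p&gt;

```shell
# Mount the current project directory into the container so edits made
# with host-side tools are immediately visible inside it.
docker run -it --rm \
  -v "$PWD":/usr/src/app \
  -w /usr/src/app \
  python:3.11-slim \
  python app.py
```

&lt;p&gt;Because the container sees a live view of the host directory, nothing needs to be rebuilt between edit-and-test cycles.&lt;/p&gt;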

&lt;p&gt;
&lt;strong&gt;How Docker Containers Move between Environments&lt;/strong&gt;
&lt;/p&gt;

&lt;p&gt;
A core tenet of DevOps is that the source code and runtimes that will be used
in production are the same as those used in development. In other words, the
most effective pipeline is one where the identical Docker image can be reused
for each stage of the pipeline.
&lt;/p&gt;&lt;/div&gt;
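&lt;p&gt;That tenet can be sketched with plain Docker commands; the registry and tag names here are illustrative:&lt;/p&gt;

```shell
# Build once, then promote the same image through each environment by
# re-tagging it, rather than rebuilding per stage.
docker build -t registry.example.com/myapp:1.0.0 .
docker push registry.example.com/myapp:1.0.0

# A later pipeline stage pulls the identical image and re-tags it.
docker pull registry.example.com/myapp:1.0.0
docker tag registry.example.com/myapp:1.0.0 registry.example.com/myapp:prod
docker push registry.example.com/myapp:prod
```

&lt;p&gt;Promoting by tag keeps development, testing and production running byte-identical images.&lt;/p&gt;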
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/sharing-docker-containers-across-devops-environments" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 18 Dec 2018 13:00:00 +0000</pubDate>
    <dc:creator>Todd A. Jacobs</dc:creator>
    <guid isPermaLink="false">1340036 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Everything You Need to Know about Containers, Part III: Orchestration with Kubernetes</title>
  <link>https://www.linuxjournal.com/content/everything-you-need-know-about-containers-part-iii-orchestration-kubernetes</link>
  <description>  &lt;div data-history-node-id="1339997" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;A look at using Kubernetes to create, deploy and manage thousands of
container images.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
If you've read the first two articles in this series, you now should be familiar with &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process"&gt;Linux kernel control groups (Part I)&lt;/a&gt;,
&lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc"&gt;Linux Containers and Docker (Part II)&lt;/a&gt;. But, here's a quick recap: once upon a time, data-center
administrators deployed entire operating systems, occupying entire hardware
servers to host a few applications each. This was a lot of overhead with a
lot to manage. Now scale that across multiple server hosts, and it became
increasingly difficult to maintain. This was a problem—a problem that
wasn't
easily solved. It would take time for technological evolution to reach
the moment where you are able to shrink the operating system and launch
these varied applications as microservices hosted across multiple containers
on the same physical machine.
&lt;/p&gt;

&lt;p&gt;
In the final part of this series, I explore the method
most people use to create, deploy and manage containers. The concept is typically
referred to as container orchestration. If I were to focus on Docker, on its
own, the technology is extremely simple to use, and running a few images
simultaneously is also just as easy. Now, scale that out to hundreds, if not
thousands, of images. How do you manage that? Eventually, you need to step
back and rely on one of the few orchestration frameworks specifically
designed to handle this problem. Enter Kubernetes.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Kubernetes&lt;/span&gt;

&lt;p&gt;
Kubernetes, or k8s (the "8" standing for the eight letters between the "k" and the "s"), originally was developed by
Google. It's an open-source platform aiming to automate container operations:
"deployment, scaling and operations of application containers across
clusters of hosts". Google was an early adopter and contributor to the
Linux Container technology (in fact, Linux Containers power
Google's very own cloud services). Kubernetes eliminates all of the
manual processes involved in the deployment and scaling of containerized
applications. It's capable of clustering together groups of servers hosting
Linux Containers while also allowing administrators to manage those
clusters easily and efficiently.
&lt;/p&gt;
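&lt;p&gt;As a small, hypothetical taste of that automation (the deployment name and image are illustrative), deploying and scaling an application comes down to a few commands:&lt;/p&gt;

```shell
# Create a deployment, expose it inside the cluster, then scale it out.
kubectl create deployment hello --image=nginx:1.15
kubectl expose deployment hello --port=80 --type=ClusterIP
kubectl scale deployment hello --replicas=5
kubectl get pods -l app=hello
```

&lt;p&gt;If a pod dies, Kubernetes replaces it automatically to keep the replica count at five; that is the self-healing behavior described above.&lt;/p&gt;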

&lt;p&gt;
Kubernetes makes it possible to respond to consumer demands quickly by
deploying your applications within a timely manner, scaling those same
applications with ease and seamlessly rolling out new features, all while
limiting hardware resource consumption. It's extremely modular and can
be hooked into by other applications or frameworks easily. It also provides
additional self-healing services, including auto-placement,
auto-replication and auto-restart of containers.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-containers-part-iii-orchestration-kubernetes" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 28 Nov 2018 12:30:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1339997 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Everything You Need to Know about Linux Containers, Part II: Working with Linux Containers (LXC)</title>
  <link>https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc</link>
  <description>  &lt;div data-history-node-id="1339992" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;&lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process"&gt;Part I of this Deep Dive on containers&lt;/a&gt; introduces
the idea of kernel control groups, or cgroups, and the way you can isolate,
limit and monitor selected userspace applications. Here,
I dive a bit deeper and focus on the next step of process
isolation—that is, through containers, and more specifically, the Linux
Containers (LXC) framework.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
Containers are about as close to bare metal as you can get when running
virtual machines. They impose very little to no overhead when hosting virtual
instances. First introduced in 2008, LXC adopted much of its functionality
from the Solaris Containers (or Solaris Zones) and FreeBSD jails that
preceded it. Instead of creating a full-fledged virtual machine, LXC enables
a virtual environment with its own process and network space. Using
namespaces to enforce process isolation and leveraging the kernel's very own
control groups (cgroups) functionality, LXC limits, accounts for and
isolates the CPU, memory, disk I/O and network usage of one or more
processes. Think of this userspace framework as a very advanced form of
&lt;code&gt;chroot&lt;/code&gt;.
&lt;/p&gt;
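&lt;p&gt;A minimal LXC session looks like the following sketch; template options and distribution names vary with your LXC version:&lt;/p&gt;

```shell
# Create a container from the generic download template, start it,
# get a shell inside, then tear it down.
lxc-create -n demo -t download -- -d ubuntu -r bionic -a amd64
lxc-start -n demo
lxc-attach -n demo        # shell inside the container
lxc-ls -f                 # list containers and their state
lxc-stop -n demo
lxc-destroy -n demo
```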


&lt;p&gt;
But what exactly are containers? The short answer is that containers decouple software
applications from the operating system, giving users a clean and minimal
Linux environment while running everything else in one or more isolated
"containers". The purpose of a container is to launch a limited set
of applications or services (often referred to as microservices) and have
them run within a self-contained sandboxed environment.
&lt;/p&gt;


&lt;img src="https://www.linuxjournal.com/sites/default/files/styles/max_1300x1300/public/u%5Buid%5D/ContainerModel.png" width="557" height="250" alt="""" class="image-max_1300x1300" /&gt;&lt;p&gt;
&lt;em&gt;Figure 1. A Comparison of
Applications Running in a Traditional Environment to Containers&lt;/em&gt;
&lt;/p&gt;

&lt;p&gt;
This isolation prevents processes running within a given container from
monitoring or affecting processes running in another container. Also, these
containerized services do not influence or disturb the host machine. The idea
of being able to consolidate many services scattered across multiple physical
servers into one is one of the many reasons data centers have chosen to adopt
the technology.
&lt;/p&gt;

&lt;p&gt;
Container features include the following:
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 27 Aug 2018 11:30:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1339992 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>The Search for a GUI Docker</title>
  <link>https://www.linuxjournal.com/content/search-gui-docker</link>
  <description>  &lt;div data-history-node-id="1339996" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/shawn-powers" lang="" about="https://www.linuxjournal.com/users/shawn-powers" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Shawn Powers&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Docker is everything but pretty; let's try to fix that. Here's a rundown of 
some GUI options available for Docker.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
I love Docker. At first it seemed a bit silly to me for a small-scale
implementation like my home setup, but after learning how to use it, I fell
in love. The standard features are certainly beneficial. It's great not
worrying that one application's dependencies will step on or conflict
with another's. But most applications are good about playing well with
others, and package management systems keep things in order. So why do I
&lt;code&gt;docker run&lt;/code&gt; instead of &lt;code&gt;apt-get install&lt;/code&gt;? Individualized system settings.
&lt;/p&gt;

&lt;p&gt;
With Docker, I can have three of the same apps running side by side. They
even can use the same port (internally) and not conflict. My torrent
client can live inside a forced-VPN network, and I don't need to worry that it will
somehow "leak" my personal IP data. Heck, I can run apps that work
only on CentOS inside my Ubuntu Docker server, and it just works! In short,
Docker is amazing.
&lt;/p&gt;
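&lt;p&gt;For example, three copies of the same image can all listen on port 80 internally while mapping to different host ports (nginx here is just a stand-in):&lt;/p&gt;

```shell
# Three instances of one image; each binds port 80 inside its own
# network namespace, so only the host-side ports need to differ.
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
docker run -d --name web3 -p 8083:80 nginx
```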

&lt;p&gt;
I just wish I could remember all the commands.
&lt;/p&gt;

&lt;p&gt;
Don't get me wrong, I'm familiar with Docker. I use it for most of my
server needs. It's my first go-to when testing a new app. Heck, I taught
an entire course on Docker for CBT Nuggets (my day job). The problem is,
Docker works so well, I rarely need to interact with it. So, my FIFO
buffer fills up, and I forget the simple command-line options to make
Docker work. Also, because I like charts and graphs, I decided to install
a Docker GUI. It was a bit of an adventure, so I thought I'd share the
ins and outs of my experience.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
My GUI Expectations&lt;/span&gt;

&lt;p&gt;
There are some things I don't really care about for a GUI. Oddly, one of
the most common uses people have for a visual interface is the ability to
create a Docker container. I actually don't mind using the command line
when I'm creating a container, because it usually takes 5–10 attempts
and tweaks before I get it how I want it. So for me, I'd like to have
at least the following features:
&lt;/p&gt;

&lt;ul&gt;&lt;li&gt;
A visual layout of all containers, whether or not they're running.
&lt;/li&gt;

&lt;li&gt;
A way to start/stop/delete containers.
&lt;/li&gt;
&lt;li&gt;
The ability to rename running containers, because I always forget to name
them, and I get tired of seeing "chubby_cheetah" for container names.
&lt;/li&gt;

&lt;li&gt;
A way to change the restart policy easily, so when I finally get a container
right, I can have it &lt;code&gt;--restart=always&lt;/code&gt;.
&lt;/li&gt;

&lt;li&gt;
Show some statistics about the system and individual containers.
&lt;/li&gt;

&lt;li&gt;
Read logs.
&lt;/li&gt;

&lt;li&gt;
Work via web interface, so I can use it remotely.
&lt;/li&gt;

&lt;li&gt;
Be a Docker container itself!
&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;
My list of needs is fairly simple, but oddly, many GUIs left me
wanting. Since everyone's desires are different, I'll go over the most
popular options I tried, and mention some pros and cons.
&lt;/p&gt;&lt;/div&gt;
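&lt;p&gt;For reference, several of the wishlist items above are single commands at the CLI (the container name is an example):&lt;/p&gt;

```shell
docker ps -a                                   # all containers, running or not
docker rename chubby_cheetah torrent-client    # fix an auto-generated name
docker update --restart=always torrent-client  # change the restart policy
docker stats --no-stream                       # resource usage snapshot
docker logs -f torrent-client                  # read and follow logs
```

&lt;p&gt;The point of a GUI, of course, is not having to remember any of these.&lt;/p&gt;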
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/search-gui-docker" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 31 Jul 2018 12:00:00 +0000</pubDate>
    <dc:creator>Shawn Powers</dc:creator>
    <guid isPermaLink="false">1339996 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>FOSS Project Spotlight: Pydio Cells, an Enterprise-Focused File-Sharing Solution</title>
  <link>https://www.linuxjournal.com/content/foss-project-spotlight-pydio-cells-enterprise-focused-file-sharing-solution</link>
  <description>  &lt;div data-history-node-id="1339956" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/italo-vignoli" lang="" about="https://www.linuxjournal.com/users/italo-vignoli" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Italo Vignoli&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
Pydio Cells is a brand-new product focused on the needs of enterprises and
large organizations, brought to you by the people who launched the concept
of open-source file sharing and synchronization back in 2008. The ambition behind
Pydio Cells is to be to file sharing what Slack has been to
chat—that is, a revolution in terms of features, power and ease of
use.
&lt;/p&gt;

&lt;p&gt;
In order to reach this objective, Pydio's development team has switched
from the old-school development stack (Apache and PHP) to Google's Go
language to overcome the bottleneck represented by legacy technologies.
Today, Pydio Cells offers a faster, more scalable microservice architecture
that is in tune with dynamic modern enterprise environments.
&lt;/p&gt;

&lt;p&gt;
In fact, Pydio's new "Cells" concept delivers file sharing as a
modern collaborative app. Users are free to create flexible group spaces for
sharing based on their own ways of working with dedicated in-app messaging
for improved collaboration.
&lt;/p&gt;

&lt;p&gt;
In addition, the enterprise data management functionality gives both
companies and administrators reassurance, with controls and reporting that
directly answer corporate requirements around the General Data Protection
Regulation (GDPR) and other tightening data
protection regulations.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Pydio Loves DevOps&lt;/span&gt;

&lt;p&gt;
In tune with modern enterprise DevOps environments, Pydio Cells now runs as
its own application server (offering a dependency-free binary, with no need for
external libraries or runtime environments). The application is available as
a Docker image, and it offers out-of-the-box connectors for
containerized application orchestrators, such as Kubernetes.
&lt;/p&gt;
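&lt;p&gt;
As a rough illustration of that container-first packaging, the image is published as &lt;code&gt;pydio/cells&lt;/code&gt; on Docker Hub; the port mapping and volume path shown here are assumptions, so check the official image documentation before relying on them:
&lt;/p&gt;

```shell
# Run Pydio Cells as a single self-contained container;
# 8080 is assumed to be the HTTP port the image exposes,
# and /var/cells the working directory worth persisting
docker pull pydio/cells
docker run -d --name cells \
  -p 8080:8080 \
  -v cells_data:/var/cells \
  pydio/cells
```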

&lt;p&gt;
Also, the application has been broken up into a series of logical
microservices. Within this new architecture, each service is allocated its
own storage and persistence, and can be scaled independently. This enables
you to manage and scale Pydio
more efficiently, allocating resources to each
specific service.
&lt;/p&gt;

&lt;p&gt;
The move to Golang has delivered a ten-fold improvement in performance. At
the same time, because the application is broken into logical microservices,
larger deployments can scale by directing additional resources only to the
services that need them, rather than inefficiently scaling the entire
solution.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Built on Standards&lt;/span&gt;

&lt;p&gt;
The new Pydio Cells architecture has been built with a renewed focus on the
most popular modern open standards:
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/foss-project-spotlight-pydio-cells-enterprise-focused-file-sharing-solution" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 13 Jul 2018 14:20:00 +0000</pubDate>
    <dc:creator>Italo Vignoli</dc:creator>
    <guid isPermaLink="false">1339956 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Managing Docker Instances with Puppet</title>
  <link>https://www.linuxjournal.com/content/managing-docker-instances-puppet</link>
  <description>  &lt;div data-history-node-id="1339445" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/todd-jacobs" lang="" about="https://www.linuxjournal.com/users/todd-jacobs" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Todd A. Jacobs&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
In a previous article, "Provisioning Docker with Puppet", in the December
2016 issue, I covered one of the ways
you can install the Docker service onto a new system with Puppet. By
contrast, this article focuses on how to manage Docker images and
containers with Puppet.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Reasons for Integrating Docker with Puppet
&lt;/span&gt;

&lt;p&gt;
There are three core use cases for integrating Docker with Puppet or
with another configuration management tool, such as Chef or Ansible:
&lt;/p&gt;

&lt;ol&gt;&lt;li&gt;
&lt;p&gt;
Using configuration management to provision the Docker service on a
host, so that it is available to manage Docker instances.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Adding or removing specific Docker instances, such as a containerized
web server, on managed hosts.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Managing complex or dynamic configurations inside Docker
containers using configuration management tools (for example, Puppet agent)
baked into the Docker image.
&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;
"Provisioning Docker with Puppet", in the December 2016 issue
of &lt;em&gt;LJ&lt;/em&gt;, covered the first use case. This article is
primarily concerned with the second.
&lt;/p&gt;

&lt;p&gt;
Container management with Puppet allows you to do a number of things that
become ever more important as an organization scales up its systems,
including the following:
&lt;/p&gt;

&lt;ol&gt;&lt;li&gt;
&lt;p&gt;
Leveraging the organization's existing configuration management
framework, rather than using a completely separate process just to
manage Docker containers.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Treating Docker containers as "just another resource" to converge in
the configuration management package/file/service lifecycle.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Installing Docker containers automatically based on hostname, node
classification or node-specific facts.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Orchestrating commands inside Docker containers on multiple hosts.
&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;
Although there certainly are other ways to achieve those goals (see
the Picking a Toolchain sidebar), it takes very little work to extend
your existing Puppet infrastructure to handle containers as part of a
node's role or profile. That's the focus for this article.
&lt;/p&gt;
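&lt;p&gt;
As a taste of what treating a container as "just another resource" looks like, here is a minimal manifest sketch using the &lt;code&gt;docker::run&lt;/code&gt; defined type from the puppetlabs-docker Forge module; the resource title, image and port are illustrative, and the article itself covers the details:
&lt;/p&gt;

```puppet
# webserver.pp -- declare a containerized web server as a Puppet resource
# (apply with: puppet apply webserver.pp)
include docker

docker::run { 'webserver':
  image => 'nginx:stable',
  ports => ['80:80'],
}
```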

&lt;span class="h3-replacement"&gt;
Picking a Toolchain
&lt;/span&gt;

&lt;p&gt;
Why focus on container management with Puppet? There certainly are other
ways to manage Docker instances, containers and clusters, including
some native to Docker itself. As with any other IT endeavor, your chosen
toolchain both provides and limits your capabilities. For a home system,
your choice of toolchain is largely a matter of taste, but in the
data center, it's often better to leverage existing tools and in-house
expertise whenever possible.
&lt;/p&gt;

&lt;p&gt;
Puppet was chosen for this series of articles because it is a strong
enterprise-class solution that has been widely deployed for more than a
decade. However, you could do much the same thing with Chef or Ansible
if you choose.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/managing-docker-instances-puppet" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Thu, 20 Jul 2017 13:40:30 +0000</pubDate>
    <dc:creator>Todd A. Jacobs</dc:creator>
    <guid isPermaLink="false">1339445 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Applied Expert Systems, Inc.'s CleverView for TCP/IP on Linux</title>
  <link>https://www.linuxjournal.com/content/applied-expert-systems-incs-cleverview-tcpip-linux-0</link>
  <description>  &lt;div data-history-node-id="1339440" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
The contemporary data center is typified by an ever-increasing amount of traffic
occurring between servers, observes &lt;a href="http://www.aesclever.com"&gt;Applied Expert Systems, Inc.&lt;/a&gt; (AES), sagely.
Fulfilling the logical need to facilitate improved server-to-server
communications, AES created CleverView for TCP/IP on Linux, now at v2.7.
CleverView provides IT staff with access to current and historical server
performance and availability details, not only from their browser desktops but
also from their mobile phones via the CLEVER Mobile for Linux app. 
&lt;/p&gt;

&lt;p&gt;
Version 2.7
features enhancements to DockerView, namely container details, including
resource utilization and process information, with the ability to drill down
into specific containers; and image details, including repository and image ID,
with historical data. 
&lt;/p&gt;

&lt;p&gt;
Finally, new options for the Enhanced Dashboard include the ability to
download graph images, manipulate graph formats and display raw data, as well
as a zoom feature with one-click navigation to view Alert Details from the
Alerts Summary graph.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/applied-expert-systems-incs-cleverview-tcpip-linux-0" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 12 Jul 2017 15:13:24 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339440 at https://www.linuxjournal.com</guid>
    </item>

  </channel>
</rss>
