<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" version="2.0" xml:base="https://www.linuxjournal.com/">
  <channel>
    <title>Containers</title>
    <link>https://www.linuxjournal.com/</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>Running GNOME in a Container</title>
  <link>https://www.linuxjournal.com/content/running-gnome-container</link>
  <description>  &lt;div data-history-node-id="1340759" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/adam-verslype" lang="" about="https://www.linuxjournal.com/users/adam-verslype" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Adam Verslype&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Containerizing the GUI separates your work and play.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
Virtualization has always been a rich man's game, and more frugal
enthusiasts—unable to afford fancy server-class components—often
struggle to keep up. Linux provides free high-quality hypervisors, but when
you start to throw real workloads at the host, its resources become
saturated quickly. No amount of spare RAM shoved into an old Dell desktop
is going to remedy this situation. If a properly decked-out host is out of
your reach, you might want to consider containers instead.
&lt;/p&gt;

&lt;p&gt;
Instead of virtualizing an entire computer, containers partition the
resources managed by a single running Linux kernel into several isolated
pieces. This happens without the overhead of emulating hardware or running
several identical kernels. With a little gumption, a full GUI environment,
such as GNOME Shell, can be launched inside a container.
&lt;/p&gt;

&lt;p&gt;
You can accomplish this through namespaces, a feature built in to the Linux
kernel. An in-depth look at this feature is beyond the scope of this
article, but a brief example sheds light on how these features can create
containers. Each kind of namespace segments a different part of the kernel.
The PID namespace, for example, prevents processes inside the namespace
from seeing other processes running on the system. As a result, those
processes believe that they are the only ones running on the computer. Each
namespace does the same thing for a different area of the kernel. The
mount namespace isolates the filesystem of the processes inside of it. The
network namespace provides a unique network stack to the processes running
inside of it. The IPC, user, UTS and cgroup namespaces do the same for
their areas of the kernel.
those areas of the kernel as well. When the seven namespaces are combined,
the result is a container: an environment isolated enough to believe it is
a freestanding Linux system.
&lt;/p&gt;
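The seven namespaces above can be inspected on a typical Linux system with nothing more than a shell; no container framework is required. A minimal sketch (the paths are standard procfs locations, but the inode numbers will differ from system to system):

```shell
# List the namespaces the current shell belongs to. Two processes that
# share a namespace show the same inode number in this listing.
ls -l /proc/self/ns
# The PID namespace link, for example, reads like pid:[4026531836].
readlink /proc/self/ns/pid
```

A process started with its own namespaces (for example, via unshare(1)) shows different inode numbers here, and that difference is precisely the isolation that container frameworks automate.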

&lt;p&gt;
Container frameworks abstract the minutiae of configuring namespaces
away from the user, but each framework has a different emphasis. Docker is
the most popular and is designed to run multiple copies of identical
containers at scale. LXC/LXD is meant to make it easy to create containers
that mimic particular Linux distributions. In fact, earlier versions of LXC
included a collection of scripts that created the filesystems of popular
distributions. A third option is libvirt's lxc driver. Contrary to how
it may sound, libvirt-lxc does not use LXC/LXD at all. Instead, the
libvirt-lxc driver manipulates kernel namespaces directly. libvirt-lxc
also integrates with the other tools in the libvirt suite, so the
configuration of libvirt-lxc containers resembles that of virtual machines
running under other libvirt drivers rather than that of a native LXC/LXD
container. As a result, it is easy to learn, even if the branding is
confusing.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/running-gnome-container" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 07 Aug 2019 19:00:00 +0000</pubDate>
    <dc:creator>Adam Verslype</dc:creator>
    <guid isPermaLink="false">1340759 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>How to Build an Enterprise Kubernetes Strategy</title>
  <link>https://www.linuxjournal.com/content/how-build-enterprise-kubernetes-strategy</link>
  <description>
&lt;span&gt;How to Build an Enterprise Kubernetes Strategy&lt;/span&gt;

&lt;span&gt;&lt;a title="View user profile." href="https://www.linuxjournal.com/user/800005" lang="" about="https://www.linuxjournal.com/user/800005" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;LJ Staff&lt;/a&gt;&lt;/span&gt;

&lt;span&gt;Sun, 07/21/2019 - 23:27&lt;/span&gt;

            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;In today’s emerging cloud-native environments, Kubernetes is everywhere.&lt;/p&gt;
&lt;p&gt;Organizations love Kubernetes because it helps significantly increase the agility and efficiency of their software development teams, enabling them to reduce the time and perils associated with putting new software into production. Information technology operations teams love Kubernetes because it helps boost productivity, reduce costs and risks, and moves organizations closer to achieving their hybrid cloud goals.&lt;/p&gt;
&lt;p&gt;Simply put, Kubernetes makes it easier to manage software complexity. As enterprise applications become more complex, development and operations (DevOps) teams need a tool that can orchestrate that complexity. They need a way to launch all the services these applications depend on, making sure the applications and services are healthy and can connect to one another.&lt;/p&gt;
&lt;p&gt;Containers have dramatically risen in popularity because they provide a consistent way to package application components and their dependencies into a single object that can run in any environment. By packaging code and its dependencies into containers, a development team can use standardized units of code as consistent building blocks. The container will run the same way in any environment and can start and terminate quickly, allowing applications to scale to any size.&lt;/p&gt;
&lt;p&gt;In fact, development teams are using containers to package entire applications and move them to the cloud without the need to make any code changes. Additionally, containers can make it easier to build workflows for applications that run between on-premises and cloud environments, enabling the smooth operation of almost any hybrid environment.&lt;/p&gt;
&lt;p&gt;You may download this special Kubernetes ebook &lt;a href="https://info.rancher.com/how-to-build-enterprise-kubernetes-strategy-linux-journal-sponsored-content" style="color:blue;"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Rancher Labs has written an ebook about this subject and they’re sharing it with &lt;em&gt;Linux Journal&lt;/em&gt; readers. Topics include:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;The Dangers of Too Many Good Things&lt;/li&gt;
&lt;li&gt;Understanding Your Organization’s Current Kubernetes Adoption&lt;/li&gt;
&lt;li&gt;Where Will You Be Running Kubernetes in Five Years?&lt;/li&gt;
&lt;li&gt;Who Should Own the Kubernetes Strategy?&lt;/li&gt;
&lt;li&gt;Centralized vs Decentralized Kubernetes Management&lt;/li&gt;
&lt;li&gt;Containerization and Kubernetes Will Disrupt Some of Your Other Plans&lt;/li&gt;
&lt;li&gt;Our Organization is Heavily Investing in Cloud Computing&lt;/li&gt;
&lt;li&gt;We Are Investing in Hyper-Converged Infrastructure as Part of a Data Center Upgrade&lt;/li&gt;
&lt;li&gt;We Are Trying to Modernize Our Existing Applications to Improve Security and Stability&lt;/li&gt;
&lt;li&gt;We Need to Cut Our Infrastructure/Cloud Spending&lt;/li&gt;
&lt;li&gt;Preparing Your Teams for Broader Kubernetes Adoption&lt;/li&gt;
&lt;li&gt;Evaluating Container Management Platforms and Delivering Kubernetes-as-a-Service&lt;/li&gt;
&lt;li&gt;Kubernetes Distribution, Cluster Provisioning and Lifecycle Management&lt;/li&gt;
&lt;li&gt;Multi-Cluster Kubernetes Management&lt;/li&gt;
&lt;li&gt;User Management and Delegated Administration&lt;/li&gt;
&lt;li&gt;Policy Management&lt;/li&gt;
&lt;li&gt;User Experience and the Entire Cloud Native Stack&lt;/li&gt;
&lt;li&gt;Kubernetes Security and Audit&lt;/li&gt;
&lt;li&gt;Open Source, SaaS, and Support&lt;/li&gt;
&lt;li&gt;A Few Final Thoughts&lt;/li&gt;
&lt;li&gt;Appendix A — Case Study: How Life Sciences Leader Illumina Implemented an Enterprise Kubernetes Strategy&lt;/li&gt;
&lt;li&gt;A Complex Puzzle with Many Parts&lt;/li&gt;
&lt;li&gt;Connecting All the Pieces with Rancher and Kubernetes&lt;/li&gt;
&lt;li&gt;How Rancher and Kubernetes Can Work for Any Organization&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
      
  &lt;div class="field field--name-field-tags field--type-entity-reference field--label-above"&gt;
    &lt;div class="field--label"&gt;Tags&lt;/div&gt;
          &lt;div class="field--items"&gt;
              &lt;div class="field--item"&gt;&lt;a href="https://www.linuxjournal.com/tag/containers" hreflang="en"&gt;Containers&lt;/a&gt;&lt;/div&gt;
          &lt;div class="field--item"&gt;&lt;a href="https://www.linuxjournal.com/tag/devops" hreflang="en"&gt;DevOps&lt;/a&gt;&lt;/div&gt;
          &lt;div class="field--item"&gt;&lt;a href="https://www.linuxjournal.com/tag/kubernetes" hreflang="en"&gt;Kubernetes&lt;/a&gt;&lt;/div&gt;
              &lt;/div&gt;
      &lt;/div&gt;
</description>
  <pubDate>Mon, 22 Jul 2019 04:27:42 +0000</pubDate>
    <dc:creator>LJ Staff</dc:creator>
    <guid isPermaLink="false">1340761 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Bringing the Benefits of Linux Containers to Operational Technology</title>
  <link>https://www.linuxjournal.com/content/bringing-benefits-linux-containers-operational-technology</link>
  <description>  &lt;div data-history-node-id="1340654" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/pavan-singh" lang="" about="https://www.linuxjournal.com/users/pavan-singh" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Pavan Singh&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;Linux container technology was introduced more than a decade ago and has recently jumped in adoption in IT environments. However, the OT (operational technology) environments, typically made up of heterogeneous embedded systems, have lagged in the adoption of container technologies, due to both the unique technology requirements and the business models that relied on proprietary systems. In this article, I explore recent innovation in open-source offerings that are enabling the use of containers in OT use cases, such as industrial control systems, IoT gateways, medical devices, Radio Access Network (RAN) products and network appliances.&lt;/p&gt;

&lt;p&gt;Enterprise IT leaders have adopted “cloud-native” computing architectures because of the innovation velocity and cost benefits derived from the approach. To leverage containers, developers segment applications into modular micro-services that enable flexible development and deployment models. These micro-services are then deployed as containers, where the service itself is integrated with the required libraries and functions. Once containerized, these application components have small footprints and deploy quickly. The applications become highly portable across compute architectures thanks to the abstraction away from the hardware and the operating system.&lt;/p&gt;

&lt;p&gt;The benefits of flexibility and modularity offered by container-based architectures are fully realized when leveraged in conjunction with higher-level orchestration systems that can manage containers throughout their entire lifecycle. Kubernetes, the leading open-source orchestration system for containers, has gained a lot of traction over the last few years. Initially developed by Google, the Kubernetes project is now maintained by the Cloud Native Computing Foundation (CNCF). CNCF is dedicated to reducing the friction around the adoption of cloud-native technologies and stewards key cloud-native projects, such as Kubernetes, Prometheus and Envoy. It is an example of an open-source organization that has fostered collaboration across the entire value chain – developers, end users and vendors. Today’s CNCF membership includes significant technology brands, such as Amazon, Cisco, Google, Microsoft, Oracle, SAP and many others.&lt;/p&gt;

&lt;p&gt;Containers and other cloud-native paradigms were initially developed with IT environments in mind. As these technologies have matured and their capabilities have grown, OT decision-makers have taken notice. And as more developers get access to container technology, they are going through a journey of their own, albeit one that differs from the journey IT developers have taken over the last decade.&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/bringing-benefits-linux-containers-operational-technology" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 22 May 2019 12:30:00 +0000</pubDate>
    <dc:creator>Pavan Singh</dc:creator>
    <guid isPermaLink="false">1340654 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Kubernetes Identity Management: Authentication</title>
  <link>https://www.linuxjournal.com/content/kubernetes-identity-management-authentication</link>
  <description>  &lt;div data-history-node-id="1340551" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/marc-boorshtein" lang="" about="https://www.linuxjournal.com/users/marc-boorshtein" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Marc Boorshtein&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;You've deployed Kubernetes, but now how are you going to get it into the hands of
your developers and admins securely?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
Kubernetes has taken the world by storm. In just a few years, Kubernetes
(aka k8s) has gone from an interesting project to a driver for technology
and innovation. One of the easiest ways to illustrate this point is
the difference in attendance in the two times KubeCon North America has
been in Seattle. Two years ago, it was in a hotel with fewer than 20
vendor booths. This year, it was at the Seattle Convention Center with
8,000 attendees and more than 100 vendors!
&lt;/p&gt;

&lt;p&gt;
Just as with any other complex system, k8s has its own security model and
needs to interact with both users and other systems. In this article,
I walk through the various authentication options and
provide examples and implementation advice as to how you should manage
access to your cluster.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
What Does Identity Mean to Kubernetes?&lt;/span&gt;

&lt;p&gt;
The first thing to ask is "what is an identity in k8s?" K8s is very
different from most other systems and applications. It's a set of APIs.
There's no "web interface" (I discuss the dashboard later in this article).
There's nowhere to "log in". There is no "session" or "timeout".
Every API request is unique and distinct, and it must contain everything
k8s needs to authenticate and authorize the request.
&lt;/p&gt;

&lt;p&gt;
That said, the main thing to remember about users in k8s is that they don't
exist in any persistent state. You don't connect k8s to an LDAP directory
or Active Directory. Every request must ASSERT an identity to k8s in one
of multiple possible methods. I capitalize ASSERT because it will become
important later. The key is to remember that k8s doesn't authenticate
users; it validates assertions.
&lt;/p&gt;
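As a concrete illustration of an asserted identity, one common method is a bearer token attached to every API request. A hypothetical kubeconfig user entry (the user name and the truncated token are placeholders) might look like this:

```yaml
# Hypothetical kubeconfig fragment: the client asserts this identity by
# sending the token with every request; k8s only validates the assertion.
users:
- name: example-user
  user:
    token: eyJhbGciOiJSUzI1NiIs...   # e.g. a JWT from an external identity provider
```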

&lt;p&gt;
&lt;strong&gt;Service Accounts&lt;/strong&gt;
&lt;/p&gt;

&lt;p&gt;
Service accounts are where this rule bends a bit. It's true that k8s
doesn't store information about users. It does store service accounts,
which are not meant to represent people. They're meant to represent
anything that isn't a person. Everything that interacts with something
else in k8s runs as a service account. As an example, if you were to
submit a very basic pod:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! &amp;&amp; sleep 3600']
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
And then look at it in k8s after deployment by running &lt;code&gt;kubectl get pod
myapp-pod -o yaml&lt;/code&gt;:

&lt;/p&gt;&lt;/div&gt;
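The full article examines that output; the detail worth anticipating (true of stock Kubernetes, though the sketch below is abridged and the field ordering hypothetical) is that k8s fills in a service account the manifest never mentioned:

```yaml
# Abridged sketch of the returned object: spec gains a serviceAccountName
# even though the submitted manifest had none.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  serviceAccountName: default   # injected: pods run as the namespace's default service account
  containers:
  - name: myapp-container
    image: busybox
```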
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/kubernetes-identity-management-authentication" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 22 Apr 2019 11:30:00 +0000</pubDate>
    <dc:creator>Marc Boorshtein</dc:creator>
    <guid isPermaLink="false">1340551 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Weekend Reading: Containers</title>
  <link>https://www.linuxjournal.com/content/weekend-reading-containers</link>
  <description>  &lt;div data-history-node-id="1340167" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/carlie-fairchild" lang="" about="https://www.linuxjournal.com/users/carlie-fairchild" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Carlie Fairchild&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;The software enabling container technology comes in many forms, with Docker as the most popular. Container technology's recent rise in popularity within the data center is a direct result of its portability and its ability to isolate working environments, limiting its impact on, and overall footprint within, the underlying computing system. To understand the technology completely, you first need to understand the many pieces that make it all possible. Join us this weekend as we learn about Containers.&lt;/p&gt;

&lt;p&gt;Before we get started, many ask: what's the difference between a container and a virtual machine? Editor &lt;a href="https://www.linuxjournal.com/users/petros-koutoupis"&gt;Petros Koutoupis&lt;/a&gt; explains: Both have a specific purpose and place with very little overlap, and one doesn't obsolete the other. A container is meant to be a lightweight environment that you spin up to host one to a few isolated applications at bare-metal performance. You should opt for virtual machines when you want to host an entire operating system or ecosystem, or to run applications incompatible with the underlying environment.&lt;/p&gt;

&lt;span class="h3-replacement"&gt;&lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process"&gt;Everything You Need to Know about Linux Containers, Part I: Linux Control Groups and Process Isolation&lt;/a&gt;&lt;/span&gt;

&lt;p&gt;Truth be told, certain software applications in the wild may need to be controlled or limited—at least for the sake of stability and, to some degree, security. Far too often, a bug or just bad code can disrupt an entire machine and potentially cripple an entire ecosystem. Fortunately, a way exists to keep those same applications in check. Control groups (cgroups) is a kernel feature that limits, accounts for and isolates the CPU, memory, disk I/O and network usage of one or more processes.&lt;/p&gt;

&lt;span class="h3-replacement"&gt;&lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc"&gt;Everything You Need to Know about Linux Containers, Part II: Working with Linux Containers (LXC)&lt;/a&gt;&lt;/span&gt;

&lt;p&gt;&lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process"&gt;Part I of this Deep Dive on containers&lt;/a&gt; introduces the idea of kernel control groups, or cgroups, and the way you can isolate, limit and monitor selected userspace applications. Here, I dive a bit deeper and focus on the next step of process isolation—that is, through containers, and more specifically, the Linux Containers (LXC) framework.&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/weekend-reading-containers" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Sat, 09 Feb 2019 12:37:42 +0000</pubDate>
    <dc:creator>Carlie Fairchild</dc:creator>
    <guid isPermaLink="false">1340167 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Sharing Docker Containers across DevOps Environments</title>
  <link>https://www.linuxjournal.com/content/sharing-docker-containers-across-devops-environments</link>
  <description>  &lt;div data-history-node-id="1340036" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/todd-jacobs" lang="" about="https://www.linuxjournal.com/users/todd-jacobs" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Todd A. Jacobs&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Docker provides a powerful tool for creating lightweight images and
containerized processes, but did you know it can make your development
environment part of the DevOps pipeline too? Whether you're managing
tens of thousands of servers in the cloud or are a software engineer looking
to incorporate Docker containers into the software development life
cycle, this article has a little something for everyone with a passion
for Linux and Docker.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
In this article, I describe how Docker containers flow
through the DevOps pipeline. I also cover some advanced DevOps
concepts (borrowed from object-oriented programming) on how to use
dependency injection and encapsulation to improve the DevOps process.
And finally, I show how containerization can be useful for the
development and testing process itself, rather than just as a
place to serve up an application after it's written.
&lt;/p&gt;


&lt;span class="h3-replacement"&gt;
Introduction&lt;/span&gt;

&lt;p&gt;
Containers are hot in DevOps shops, and their benefits from an
operations and service delivery point of view have been covered well
elsewhere. If you want to build a Docker container or deploy a Docker
host, container or swarm, a lot of information is available.
However, very few articles talk about how to &lt;em&gt;develop&lt;/em&gt; inside the Docker
containers that will be reused later in the DevOps pipeline, so that's what
I focus on here.
&lt;/p&gt;

&lt;img src="https://www.linuxjournal.com/sites/default/files/styles/max_650x650/public/u%5Buid%5D/12282f1%281%29.png" width="650" height="130" alt="Stages a Docker Container Moves Through in a Typical DevOps Pipeline" class="image-max_650x650" /&gt;&lt;p&gt;&lt;em&gt;Figure 1.
Stages a Docker Container Moves Through in a Typical DevOps
Pipeline&lt;/em&gt;&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Container-Based Development Workflows&lt;/span&gt;

&lt;p&gt;
Two common workflows exist for developing software for use inside Docker
containers:
&lt;/p&gt;

&lt;ol&gt;&lt;li&gt;
Injecting development tools into an existing Docker container:
this is the best option for sharing a consistent development environment
with the same toolchain among multiple developers, and it can be used in
conjunction with web-based development environments, such as Red Hat's
codenvy.com or dockerized IDEs like Eclipse Che.
&lt;/li&gt;

&lt;li&gt;
Bind-mounting a host directory onto the Docker container and using your
existing development tools on the host:
this is the simplest option, and it offers flexibility for developers
to work with their own set of locally installed development tools.
&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;
Both workflows have advantages, but local mounting is inherently simpler. For
that reason, I focus on the mounting solution as "the simplest
thing that could possibly work" here.
&lt;/p&gt;
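A sketch of that bind-mount workflow, with the image name and command purely illustrative; the point is that files edited on the host with your own tools are immediately visible inside the container:

```shell
# Bind-mount the current project directory into a disposable container and
# run a command against it; falls back to a message when docker is absent.
if command -v docker >/dev/null 2>/dev/null; then
    docker run --rm -v "$PWD":/src -w /src alpine:3 ls /src
else
    echo "docker not installed here; the command is shown for illustration"
fi
```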

&lt;p&gt;
&lt;strong&gt;How Docker Containers Move between Environments&lt;/strong&gt;
&lt;/p&gt;

&lt;p&gt;
A core tenet of DevOps is that the source code and runtimes that will be used
in production are the same as those used in development. In other words, the
most effective pipeline is one where the identical Docker image can be reused
for each stage of the pipeline.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/sharing-docker-containers-across-devops-environments" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 18 Dec 2018 13:00:00 +0000</pubDate>
    <dc:creator>Todd A. Jacobs</dc:creator>
    <guid isPermaLink="false">1340036 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Everything You Need to Know about Containers, Part III: Orchestration with Kubernetes</title>
  <link>https://www.linuxjournal.com/content/everything-you-need-know-about-containers-part-iii-orchestration-kubernetes</link>
  <description>  &lt;div data-history-node-id="1339997" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;A look at using Kubernetes to create, deploy and manage thousands of
container images.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
If you've read the first two articles in this series, you now should be familiar with &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process"&gt;Linux kernel control groups (Part I)&lt;/a&gt;,
&lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc"&gt;Linux Containers and Docker (Part II)&lt;/a&gt;. But, here's a quick recap: once upon a time, data-center
administrators deployed entire operating systems, occupying entire hardware
servers to host a few applications each. This meant a lot of overhead and a
lot to manage. Scaled across multiple server hosts, it became increasingly
difficult to maintain. This was a problem, and not one that was
easily solved. It would take time for technological evolution to reach
the moment where you are able to shrink the operating system and launch
these varied applications as microservices hosted across multiple containers
on the same physical machine.
&lt;/p&gt;

&lt;p&gt;
In the final part of this series, I explore the method
most people use to create, deploy and manage containers. The concept is typically
referred to as container orchestration. Docker on its own is extremely
simple to use, and running a few images
simultaneously is just as easy. Now, scale that out to hundreds, if not
thousands, of images. How do you manage that? Eventually, you need to step
back and rely on one of the few orchestration frameworks specifically
designed to handle this problem. Enter Kubernetes.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Kubernetes&lt;/span&gt;

&lt;p&gt;
Kubernetes, or k8s ("k", then eight letters, then "s"), originally was developed by
Google. It's an open-source platform aiming to automate container operations:
"deployment, scaling and operations of application containers across
clusters of hosts". Google was an early adopter of, and contributor to, the
Linux Container technology (in fact, Linux Containers power
Google's very own cloud services). Kubernetes eliminates all of the
manual processes involved in the deployment and scaling of containerized
applications. It's capable of clustering together groups of servers hosting
Linux Containers while also allowing administrators to manage those
clusters easily and efficiently.
&lt;/p&gt;
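That declarative management can be made concrete with a manifest; a minimal, hypothetical Deployment (all names are placeholders) where changing the replicas field is all it takes to scale the application up or down:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp              # placeholder name
spec:
  replicas: 3              # raise or lower this; k8s converges the cluster to match
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: busybox     # placeholder image
        command: ["sh", "-c", "sleep 3600"]
```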

&lt;p&gt;
Kubernetes makes it possible to respond to consumer demands quickly by
deploying your applications within a timely manner, scaling those same
applications with ease and seamlessly rolling out new features, all while
limiting hardware resource consumption. It's extremely modular and can
be hooked into by other applications or frameworks easily. It also provides
additional self-healing services, including auto-placement,
auto-replication and auto-restart of containers.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-containers-part-iii-orchestration-kubernetes" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 28 Nov 2018 12:30:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1339997 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>FOSS Project Spotlight: BlueK8s</title>
  <link>https://www.linuxjournal.com/content/foss-project-spotlight-bluek8s</link>
  <description>  &lt;div data-history-node-id="1340190" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/tom-phelan" lang="" about="https://www.linuxjournal.com/users/tom-phelan" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Tom Phelan&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Deploying and managing complex stateful applications on Kubernetes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
&lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; (aka K8s) is now the de facto container orchestration
framework. Like other popular open-source technologies, Kubernetes has
amassed a considerable ecosystem of complementary tools to address
everything from storage to security. And although it was first created for
running &lt;a href="https://whatis.techtarget.com/definition/stateless-app"&gt;stateless
applications&lt;/a&gt;, more and more organizations are
interested in using Kubernetes for &lt;a href="https://whatis.techtarget.com/definition/stateful-app"&gt;stateful
applications&lt;/a&gt;.
&lt;/p&gt;

&lt;p&gt;
However, while Kubernetes has advanced significantly in many areas during the
past couple of years, there still are considerable gaps when it comes to
running complex stateful applications. With Kubernetes, it remains
challenging to deploy and manage distributed stateful applications
consisting of a multitude of cooperating services, such as those used for
large-scale analytics and machine learning.
&lt;/p&gt;

&lt;p&gt;
I've been focused on this space for the past several years as a
co-founder of &lt;a href="https://www.bluedata.com"&gt;BlueData&lt;/a&gt;. During that
time, I've worked with many teams
at Global 2000 enterprises in several industries to successfully deploy
distributed stateful services such as Hadoop, Spark, Kafka, Cassandra,
TensorFlow and other analytics, data science, machine learning (ML) and
deep learning (DL) tools in containerized environments.
&lt;/p&gt;

&lt;p&gt;
In that time, I've learned what it takes to deploy complex stateful
applications like these with containers while ensuring enterprise-grade
security, reliability and performance. Together with my colleagues at
BlueData, we've broken new ground in using Docker containers for big
data analytics, data science and ML/DL in highly distributed
environments. We've developed new innovations to address
requirements in areas like storage, security, networking, performance and
lifecycle management.
&lt;/p&gt;

&lt;p&gt;
Now we want to bring those innovations to the Open Source community—to ensure that these stateful services are supported in the Kubernetes
ecosystem. BlueData's engineering team has been busy working with
Kubernetes, &lt;a href="https://www.bluedata.com/blog/2017/12/big-data-container-orchestration-kubernetes-k8s"&gt;developing
prototypes&lt;/a&gt; with Kubernetes in our labs and
collaborating with multiple enterprise organizations to evaluate the
opportunities (and challenges) in using Kubernetes for complex stateful
applications.
&lt;/p&gt;

&lt;p&gt;
To that end, we recently &lt;a href="https://www.bluedata.com/article/bluek8s-and-kubernetes-director-for-stateful-applications"&gt;introduced&lt;/a&gt;
a new Kubernetes open-source
initiative: BlueK8s. The BlueK8s initiative will be composed of several
open-source projects that each will bring enterprise-level capabilities for
stateful applications to Kubernetes.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/foss-project-spotlight-bluek8s" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 16 Nov 2018 13:00:00 +0000</pubDate>
    <dc:creator>Tom Phelan</dc:creator>
    <guid isPermaLink="false">1340190 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Everything You Need to Know about Linux Containers, Part II: Working with Linux Containers (LXC)</title>
  <link>https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc</link>
  <description>  &lt;div data-history-node-id="1339992" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;&lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process"&gt;Part I of this Deep Dive on containers&lt;/a&gt; introduces
the idea of kernel control groups, or cgroups, and the way you can isolate,
limit and monitor selected userspace applications. Here,
I dive a bit deeper and focus on the next step of process
isolation—that is, through containers, and more specifically, the Linux
Containers (LXC) framework.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
Containers are about as close to bare metal as you can get when running
virtual machines. They impose very little to no overhead when hosting virtual
instances. First introduced in 2008, LXC adopted much of its functionality
from the Solaris Containers (or Solaris Zones) and FreeBSD jails that
preceded it. Instead of creating a full-fledged virtual machine, LXC enables
a virtual environment with its own process and network space. Using
namespaces to enforce process isolation and leveraging the kernel's very own
control groups (cgroups) functionality, LXC limits, accounts for and
isolates the CPU, memory, disk I/O and network usage of one or more
processes. Think of this userspace framework as a very advanced form of
&lt;code&gt;chroot&lt;/code&gt;.
&lt;/p&gt;
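&lt;p&gt;
As a brief, hedged sketch (assuming the LXC userspace tools are installed
and the commands run with root privileges; the container name,
distribution, release and architecture are illustrative values), a
container can be created and entered like this:
&lt;/p&gt;

```shell
# Create a container from the downloadable image index
# (distribution, release and architecture are example values).
sudo lxc-create -n demo -t download -- -d ubuntu -r focal -a amd64

# Start the container in the background, then attach a shell inside it.
sudo lxc-start -n demo
sudo lxc-attach -n demo

# List containers and their state; stop and destroy when finished.
sudo lxc-ls -f
sudo lxc-stop -n demo
sudo lxc-destroy -n demo
```

&lt;p&gt;
Inside the attached shell, processes see only the container's own process
and network space, not the host's.
&lt;/p&gt;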


&lt;p&gt;
But what exactly are containers? The short answer is that containers decouple software
applications from the operating system, giving users a clean and minimal
Linux environment while running everything else in one or more isolated
"containers". The purpose of a container is to launch a limited set
of applications or services (often referred to as microservices) and have
them run within a self-contained sandboxed environment.
&lt;/p&gt;


&lt;img src="https://www.linuxjournal.com/sites/default/files/styles/max_1300x1300/public/u%5Buid%5D/ContainerModel.png" width="557" height="250" alt="" class="image-max_1300x1300" /&gt;&lt;p&gt;
&lt;em&gt;Figure 1. A Comparison of
Applications Running in a Traditional Environment to Containers&lt;/em&gt;
&lt;/p&gt;

&lt;p&gt;
This isolation prevents processes running within a given container from
monitoring or affecting processes running in another container. Also, these
containerized services do not influence or disturb the host machine. The idea
of being able to consolidate many services scattered across multiple physical
servers into one is one of the many reasons data centers have chosen to adopt
the technology.
&lt;/p&gt;

&lt;p&gt;
Container features include the following:
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 27 Aug 2018 11:30:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1339992 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Everything You Need to Know about Linux Containers, Part I: Linux Control Groups and Process Isolation</title>
  <link>https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process</link>
  <description>  &lt;div data-history-node-id="1339985" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;
Everyone's heard the term, but what exactly are containers?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
The software enabling this
technology comes in many forms, with Docker as the most popular. The
recent rise in popularity of container technology within the data center is a direct result of its
portability and ability to isolate working environments, thus limiting
its impact and overall footprint on the underlying computing system.
To understand the technology completely, you first
need to understand the many pieces that make it all
possible.
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Sidenote: people often ask about the difference between containers and virtual machines. Both have a specific purpose and place with very little overlap, and one doesn't obsolete the other. A container is meant to be a lightweight environment that you spin up to host one to a few isolated applications at bare-metal performance. You should opt for virtual machines when you want to host an entire operating system or ecosystem or maybe to run applications incompatible with the underlying environment.&lt;/em&gt;
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Linux Control Groups&lt;/span&gt;

&lt;p&gt;
Truth be told, certain software applications in the wild
may need to be controlled or limited—at least for the sake of stability
and, to some degree, security. Far too often, a bug or just bad code can disrupt
an entire machine and potentially cripple an entire ecosystem. Fortunately,
a way exists to keep those same applications in check. Control groups
(cgroups) is a kernel feature that limits, accounts for and isolates the CPU,
memory, disk I/O and network usage of one or more processes.
&lt;/p&gt;
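&lt;p&gt;
As a hedged illustration (assuming a system using the cgroup v2 unified
hierarchy and root privileges; the group name and limit are example
values), a memory-limited group can be created directly through the cgroup
filesystem:
&lt;/p&gt;

```shell
# Create a new cgroup under the v2 unified hierarchy.
mkdir /sys/fs/cgroup/demo

# Cap the group's memory usage at 256 MiB.
echo 256M > /sys/fs/cgroup/demo/memory.max

# Move the current shell (and its future children) into the group.
echo $$ > /sys/fs/cgroup/demo/cgroup.procs

# Inspect the accounted memory usage for the group.
cat /sys/fs/cgroup/demo/memory.current
```

&lt;p&gt;
Every process moved into &lt;code&gt;demo&lt;/code&gt; now shares the same memory
cap, and the kernel accounts for the group's usage automatically.
&lt;/p&gt;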

&lt;p&gt;
Originally developed by Google, the cgroups technology eventually would find
its way to the Linux kernel mainline in version 2.6.24 (January 2008). A
redesign of this technology—that is, the addition of kernfs (to split some
of the sysfs logic)—would be merged into both the 3.15 and 3.16 kernels.
&lt;/p&gt;

&lt;p&gt;
The primary design goal for cgroups was to provide a unified interface to
manage processes or whole operating-system-level virtualization, including
Linux Containers, or LXC (a topic I plan to revisit in more detail in a
follow-up article). The cgroups framework provides the following:
&lt;/p&gt;

&lt;ul&gt;&lt;li&gt;
&lt;strong&gt;Resource limiting:&lt;/strong&gt;
a group can be configured not to exceed a specified memory limit or use more
than the desired amount of processors or be limited to specific peripheral
devices.
&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;Prioritization:&lt;/strong&gt;
one or more groups may be configured to receive a larger or smaller share
of CPU time or disk I/O throughput.
&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;Accounting:&lt;/strong&gt;
a group's resource usage is monitored and measured.
&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;Control:&lt;/strong&gt;
groups of processes can be frozen or stopped and restarted.
&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;
A cgroup can consist of one or more processes that are all bound to the same
set of limits. These groups also can be hierarchical, which means that a
subgroup inherits the limits applied to its parent group.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 21 Aug 2018 12:15:15 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1339985 at https://www.linuxjournal.com</guid>
    </item>

  </channel>
</rss>
