<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" version="2.0" xml:base="https://www.linuxjournal.com/">
  <channel>
    <title>Storage</title>
    <link>https://www.linuxjournal.com/</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>Data in a Flash, Part IV: the Future of Memory Technologies</title>
  <link>https://www.linuxjournal.com/content/data-flash-part-iv-future-memory-technologies</link>
  <description>  &lt;div data-history-node-id="1340747" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
I have spent the first three parts of this series describing the
evolution and current state of Flash storage. I also described how to configure an NVMe
over Fabric (NVMeoF) storage network to export NVMe volumes across RDMA
over Converged Ethernet (RoCE) and again over native TCP. [See Petros' &lt;a href="https://www.linuxjournal.com/content/data-flash-part-i-evolution-disk-storage-and-introduction-nvme"&gt;"Data
in a Flash, Part I: the Evolution of Disk Storage and an Introduction to
NVMe"&lt;/a&gt;, &lt;a href="https://www.linuxjournal.com/content/data-flash-part-ii-using-nvme-drives-and-creating-nvme-over-fabrics-network"&gt;"Data
in a Flash, Part II: Using NVMe Drives and Creating an NVMe over Fabrics
Network"&lt;/a&gt; and &lt;a href="https://www.linuxjournal.com/content/data-flash-part-iii-nvme-over-fabrics-using-tcp"&gt;"Data
in a Flash, Part III: NVMe over Fabrics Using TCP"&lt;/a&gt;.]
&lt;/p&gt;

&lt;p&gt;
But what does
the future of memory technologies look like? With traditional Flash
technologies that are enabled via NVMe, you should continue to expect
higher capacities. For instance, what comes after Quad-Level Cell (QLC)
NAND technology? Only time will tell. The next-generation NVMe
specification will introduce a protocol standard operating across more PCI
Express lanes and at a higher bandwidth. As memory technologies continue to
evolve, the method in which you plug that technology into your computers will
evolve with it.
&lt;/p&gt;

&lt;p&gt;
Remember, the ultimate goal is to move closer to the CPU and reduce access
times (that is, latencies).
&lt;/p&gt;

&lt;img src="https://www.linuxjournal.com/sites/default/files/u%5Buid%5D/Data%20Performance%20Gap.png" width="717" height="237" alt="""" /&gt;&lt;p&gt;
&lt;em&gt;Figure 1. The Data Performance Gap as You Move Further Away from the
CPU&lt;/em&gt;&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Storage Class Memory&lt;/span&gt;

&lt;p&gt;
For years, vendors have been developing a technology in which you are able
to plug persistent memory into traditional DIMM slots. Yes, these are the
very same slots that volatile DRAM also uses. Storage Class Memory (SCM)
is a newer hybrid storage tier. It's not exactly memory, and it's also not
exactly storage. It lives closer to the CPU and comes in two forms: 1)
traditional DRAM backed by a large capacitor to preserve data to a local
NAND chip (for example, NVDIMM-N) and 2) a complete NAND module (NVDIMM-F). In the
first case, you retain DRAM speeds, but you don't get the capacity.
Typically, a
DRAM-based NVDIMM lags behind the latest traditional DRAM sizes. Vendors such
as Viking Technology and Netlist are the main producers of DRAM-based
NVDIMM products.
&lt;/p&gt;

&lt;p&gt;
The second form, however, will give you the larger capacities, but it's
nowhere near as fast as DRAM. Here, you will find standard
NAND—the
very same as found in modern Solid State Drives (SSDs)—fixed onto your
traditional DIMM modules.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/data-flash-part-iv-future-memory-technologies" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 19 Jul 2019 11:30:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1340747 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Data in a Flash, Part III: NVMe over Fabrics Using TCP</title>
  <link>https://www.linuxjournal.com/content/data-flash-part-iii-nvme-over-fabrics-using-tcp</link>
  <description>  &lt;div data-history-node-id="1340651" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;A remote NVMe block device exported via an NVMe over
Fabrics network using TCP.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
Version 5.0 of the Linux kernel brought with it many wonderful features,
one of which was the introduction of NVMe over Fabrics (NVMeoF) across
native TCP. If you recall, in the previous part of this series (&lt;a href="https://www.linuxjournal.com/content/data-flash-part-ii-using-nvme-drives-and-creating-nvme-over-fabrics-network"&gt;"Data
in a Flash, Part II: Using NVMe Drives and Creating an NVMe over Fabrics
Network"&lt;/a&gt;), I explained how to enable
your NVMe network across RDMA (an InfiniBand protocol) through a
method referred to as RDMA over Converged Ethernet (RoCE). As the name
implies, it allows for the transfer of RDMA across a traditional Ethernet
network. And although this works well, it introduces a bit of overhead
(along with latencies). So when the 5.0 kernel introduced native TCP
support for NVMe targets, it simplified the procedure needed to
configure the same network, as shown in my last article, and
it also made accessing the remote NVMe drive faster.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Software Requirements&lt;/span&gt;

&lt;p&gt;
To continue with this tutorial, you'll need to have a 5.0
Linux kernel or later installed, with the following modules built and
inserted into the operating systems of both your initiator (the server
importing the remote NVMe volume) and the target (the server exporting
its local NVMe volume):

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
# NVME Support
CONFIG_NVME_CORE=y
CONFIG_BLK_DEV_NVME=y
# CONFIG_NVME_MULTIPATH is not set
CONFIG_NVME_FABRICS=m
CONFIG_NVME_RDMA=m
# CONFIG_NVME_FC is not set
CONFIG_NVME_TCP=m
CONFIG_NVME_TARGET=m
CONFIG_NVME_TARGET_LOOP=m
CONFIG_NVME_TARGET_RDMA=m
# CONFIG_NVME_TARGET_FC is not set
CONFIG_NVME_TARGET_TCP=m
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
More specifically, you need the module to import the remote NVMe volume:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
CONFIG_NVME_TCP=m
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
And the module to export a local NVMe volume:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
CONFIG_NVME_TARGET_TCP=m
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
Before continuing, make sure your physical (or virtual)
machine is up to date. And once you verify that to be the case,
make sure you are able to see all locally connected NVMe devices
(which you'll export across your network):

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
$ cat /proc/partitions |grep -e nvme -e major
major minor  #blocks  name
 259        0 3907018584 nvme2n1
 259        1 3907018584 nvme3n1
 259        2 3907018584 nvme0n1
 259        3 3907018584 nvme1n1
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
If you don't see any connected NVMe devices, make sure the kernel
module is loaded:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
petros@ubu-nvme1:~$ lsmod|grep nvme
nvme                   32768  0
nvme_core              61440  1 nvme
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
The following modules need to be loaded on the initiator:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
$ sudo modprobe nvme
$ sudo modprobe nvme-tcp
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
And, the following modules need to be loaded on the target:

&lt;/p&gt;&lt;/div&gt;
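&lt;p&gt;
Based on the CONFIG_NVME_TARGET and CONFIG_NVME_TARGET_TCP kernel options
listed above, these are presumably the nvmet and nvmet-tcp modules:
&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
$ sudo modprobe nvmet
$ sudo modprobe nvmet-tcp
&lt;/code&gt;
&lt;/pre&gt;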
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/data-flash-part-iii-nvme-over-fabrics-using-tcp" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 10 Jun 2019 11:00:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1340651 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>FOSS Project Spotlight: Bareos, a Cross-Network, Open-Source Backup Solution</title>
  <link>https://www.linuxjournal.com/content/foss-project-spotlight-bareos-cross-network-open-source-backup-solution</link>
  <description>  &lt;div data-history-node-id="1340600" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/heike-jurzik-and-maik-aussendorf" lang="" about="https://www.linuxjournal.com/users/heike-jurzik-and-maik-aussendorf" typeof="schema:Person" property="schema:name" datatype="" content="Heike Jurzik and Maik Aussendorf" xml:lang=""&gt;Heike Jurzik a…&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
&lt;a href="https://www.bareos.org"&gt;Bareos&lt;/a&gt; (Backup Archiving Recovery Open
Sourced) is a cross-network, open-source
backup solution that preserves, archives and recovers data from all major
operating systems. The Bareos project started in 2010 as a Bacula fork and is now
being developed under the AGPLv3 license.
&lt;/p&gt;

&lt;p&gt;
The client/server-based backup solution is actually a set of computer programs
(Figure 1) that communicate over the network: the Bareos Director (BD), one or
more Storage Dæmons (SD) and the File Dæmons (FD). Due to this modular
design, Bareos is scalable—from single computer systems (where all
components run on one machine) to large infrastructures with hundreds of
computers (even in different geographies).
&lt;/p&gt;

&lt;img src="https://www.linuxjournal.com/sites/default/files/u%5Buid%5D/12764f1.png" width="1000" height="1124" alt="""" /&gt;&lt;p&gt;
&lt;em&gt;Figure 1. A Typical Bareos Setup: Director (with Database), File Dæmon(s),
Storage Dæmon(s) and Backup Media&lt;/em&gt;
&lt;/p&gt;

&lt;p&gt;
The director is the central control unit for all other dæmons. It manages the
database (catalog), the connected clients, the file sets (they define which
data Bareos should back up), the configuration of optional plugins, before and
after jobs (programs to be executed before or after a backup job), the storage
and media pool, schedules and the backup jobs. Bareos Director runs as a
dæmon.
&lt;/p&gt;
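&lt;p&gt;
To give an idea of the configuration style, here is a hypothetical sketch
of a file set resource in the director configuration (the resource name and
path are made up; consult the Bareos documentation for the exact syntax):
&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
# Define which data Bareos should back up
FileSet {
  Name = "HomeDirs"
  Include {
    Options {
      Signature = MD5
      Compression = GZIP
    }
    File = /home
  }
}
&lt;/code&gt;
&lt;/pre&gt;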

&lt;p&gt;
The catalog maintains a record of all backup jobs, saved files and volumes
used. Current Bareos versions support PostgreSQL, MySQL and SQLite, with
PostgreSQL being the preferred database back end.
&lt;/p&gt;

&lt;p&gt;
The File Dæmon (FD) must be installed on every client machine. It is
responsible for the backup as well as the restore process. The FD receives the
director's instructions, executes them and transmits the data to the Bareos
Storage Dæmon. Bareos offers pre-packed file dæmons for many popular
operating systems, such as Linux, FreeBSD, AIX, HP-UX, Solaris, Windows and macOS.
Like the director, the FD runs as a dæmon in the background.
&lt;/p&gt;

&lt;p&gt;
The Storage Dæmon (SD) receives data from one or more File Dæmons (at the
director's request). It stores the data (together with the file attributes) on
the configured backup medium. Bareos supports various types of backup media, as
shown in Figure 1, including disks, tape drives and even cloud storage
solutions. During the restore process, the SD is responsible for sending the
correct data back to the FD(s). The Storage Dæmon runs as a dæmon on the
machine handling the backup device(s).
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/foss-project-spotlight-bareos-cross-network-open-source-backup-solution" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 17 May 2019 12:00:00 +0000</pubDate>
    <dc:creator>Heike Jurzik and Maik Aussendorf</dc:creator>
    <guid isPermaLink="false">1340600 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>The Ceph Foundation and Building a Community: an Interview with SUSE</title>
  <link>https://www.linuxjournal.com/content/ceph-foundation-and-building-community-interview-suse</link>
  <description>  &lt;div data-history-node-id="1340374" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
On November 12 at the OpenStack Summit in Berlin, Germany, the Linux Foundation
formally announced the Ceph Foundation. Present at this same summit were key
individuals from SUSE and the SUSE Enterprise Storage team. For those less
familiar with the SUSE Enterprise Storage product line, it is entirely powered
by Ceph technology.
&lt;/p&gt;

&lt;p&gt;
With Ceph, data is treated and stored as objects. This is unlike traditional
(and legacy) data storage solutions, where data is written to and read from
the storage volumes via sectors and at sector offsets (often referred to as
blocks). When dealing with large amounts of data, treating it as objects is
far more practical, and it's also much easier to manage. In fact, this
is how the cloud functions—with objects. This object-driven model allows
Ceph to scale easily to meet consumer demand. These objects
are replicated across an entire cluster of nodes, giving Ceph its
fault-tolerance and further reducing single points of failure. The parent
company of the project and its technology was acquired by Red Hat, Inc., in
April 2014.
&lt;/p&gt;

&lt;p&gt;
I was fortunate enough to connect with a few key
SUSE representatives for a quick Q &amp; A about this recent
announcement. I spoke with Lars Marowsky-Brée, SUSE Distinguished
Engineer and member of the governing board of the Ceph Foundation; Larry
Morris, Senior Product Manager for SUSE Enterprise Storage; Sanjeet Singh,
Solutions Owner for SUSE Enterprise Storage; and Michael Dilio, Product and
Solutions Marketing Manager for SUSE Enterprise Storage.
&lt;/p&gt;


&lt;p&gt;
&lt;strong&gt;Petros Koutoupis:&lt;/strong&gt; How has IBM's recent Red Hat, Inc., acquisition
announcement affected the Ceph project, and do you believe this is what led to
the creation of the Ceph Foundation?
&lt;/p&gt;

&lt;p&gt;
&lt;strong&gt;SUSE:&lt;/strong&gt; With Ceph being an Open Source community project, there is
no anticipated effect on the Ceph project as a result of the pending IBM
acquisition of Red Hat. Discussions and planning of the Ceph Foundation have
been going on for some time and were not a
result of the acquisition announcement.
&lt;/p&gt;

&lt;p&gt;
&lt;strong&gt;PK:&lt;/strong&gt; For some time, SUSE has been fully committed to the
Ceph project and has even leveraged the same technology in its SUSE
Enterprise Storage offering. Will these recent announcements impact both the
offering and the customers using it?
&lt;/p&gt;

&lt;p&gt;
&lt;strong&gt;SUSE:&lt;/strong&gt; The Ceph Foundation news is a validation of the vibrancy
of the Ceph community. There are 13 premier members, with SUSE being a
founding and premier member.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/ceph-foundation-and-building-community-interview-suse" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 28 Dec 2018 13:00:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1340374 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Papa's Got a Brand New NAS: the Software</title>
  <link>https://www.linuxjournal.com/content/papas-got-brand-new-nas-software</link>
  <description>  &lt;div data-history-node-id="1340119" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/kyle-rankin" lang="" about="https://www.linuxjournal.com/users/kyle-rankin" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Kyle Rankin&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Who needs a custom NAS OS or a web-based GUI when command-line
NAS software is so easy to configure?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
In a recent letter to the editor, a reader wrote that they
enjoyed my &lt;a href="https://www.linuxjournal.com/content/papas-got-brand-new-nas"&gt;"Papa's
Got a Brand New NAS"&lt;/a&gt; article but wished I had
spent more time describing the software I used. When I
wrote the article, I decided not to dive into the software too much,
because it all was pretty standard for serving files under Linux.
But on second thought, if you want to re-create what I made, I
imagine it would be nice to know the software side as well, so this article
describes the software I use in my home NAS.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
The OS&lt;/span&gt;

&lt;p&gt;
My NAS uses the &lt;a href="https://www.hardkernel.com/main/products/prdt_info.php"&gt;ODROID-XU4&lt;/a&gt; as the main computing platform, and so
far, I've found its octa-core ARM CPU and the rest of its resources
to be adequate for a home NAS. When I first set it up, I visited the
&lt;a href="https://wiki.odroid.com/odroid-xu4/odroid-xu4"&gt;official wiki
page&lt;/a&gt; for the computer, which provides a number of OS
images, including Ubuntu and Android images that you can copy onto a
microSD card. Those images are geared more toward desktop use,
however, and I wanted a minimal server image. After some searching,
I found a &lt;a href="https://forum.odroid.com/viewtopic.php?f=96&amp;t=17542"&gt;minimal image for what was the current Debian stable
release at the time (Jessie)&lt;/a&gt;.
&lt;/p&gt;


&lt;p&gt;
Although this minimal image worked okay for me, I don't necessarily
recommend just going with whatever OS some volunteer on a forum
creates. Since I first set up the computer, the Armbian project has
been released, and it supports a number of standardized OS images for quite
a few ARM platforms including the ODROID-XU4. So if you
want to follow in my footsteps, you may want to start with the &lt;a href="https://www.armbian.com/odroid-xu4"&gt;minimal Armbian
Debian image&lt;/a&gt;.
&lt;/p&gt;
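&lt;p&gt;
Writing an image like that to a microSD card from another Linux machine is
typically a one-line affair. As a sketch (the image filename and the
/dev/sdX device node are placeholders; double-check the target device
before writing, as dd will happily overwrite the wrong disk):
&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
$ sudo dd if=armbian-minimal.img of=/dev/sdX bs=4M status=progress conv=fsync
&lt;/code&gt;
&lt;/pre&gt;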

&lt;p&gt;
If you've ever used a Raspberry Pi before, the process of setting
up an alternative ARM board shouldn't be too different. Use another
computer to write an OS image to a microSD card, boot the ARM board,
and at boot, the image will expand to fill the existing filesystem.
Then reboot and connect to the network, so you can log in with the default
credentials your particular image sets up. As with Raspbian builds,
the first step you should perform with Armbian or any other OS image
is to change the default password to something else. Even better,
you should consider setting up proper user accounts instead of
relying on the default.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/papas-got-brand-new-nas-software" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 29 Oct 2018 12:00:00 +0000</pubDate>
    <dc:creator>Kyle Rankin</dc:creator>
    <guid isPermaLink="false">1340119 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>NETGEAR 48-Port Gigabit Smart Managed Plus Switch (GS750E)</title>
  <link>https://www.linuxjournal.com/content/netgear-48-port-gigabit-smart-managed-plus-switch-gs750e</link>
  <description>  &lt;div data-history-node-id="1339544" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
More than ever, small to mid-sized businesses depend on
their networks to carry out mission-critical business activities. As
always, however, limited budgets and expertise keep these companies from
using complex managed switches to run their networks. Extending a hand
to assist is &lt;a href="https://www.netgear.com"&gt;NETGEAR, Inc.&lt;/a&gt;, whose new NETGEAR 48-port Gigabit Smart
Managed Plus Switch (GS750E) provides an easy, reliable and affordable
connectivity solution for expanding networks for workstations/servers,
Network Attached Storage (NAS) and PCs. 
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12251f2.jpg" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;p&gt;
NETGEAR's
"industry-first"
GS750E 48-port switch is designed to meet the current and future needs of any
IP network, enabling network optimization, eliminating bottlenecks
and featuring a leading speed/affordability ratio. The device, with
its convenient web-based management, further helps companies in need of
network intelligence to separate and prioritize voice and video traffic
from data to support applications, such as VoIP phones and IP cameras, on
their Ethernet infrastructure. The fanless GS750E supports VLAN, QoS, LAG
advanced L2 features, such as traffic prioritization and link aggregation.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/netgear-48-port-gigabit-smart-managed-plus-switch-gs750e" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 10 Nov 2017 16:12:57 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339544 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>SUSE Software-Defined Storage Leverages Open Source to Break Proprietary Lock-in and Reduce Cost</title>
  <link>https://www.linuxjournal.com/content/suse-software-defined-storage-leverages-open-source-break-proprietary-lock-and-reduce-cost</link>
  <description>  &lt;div data-history-node-id="1339521" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/john-grogan" lang="" about="https://www.linuxjournal.com/users/john-grogan" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;John Grogan&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
Gartner analysts noted in a recent Cool Vendor report:
&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;
It has become painfully
evident that storage capacity demands, and expectations for far more rapid
provisioning of that storage, have far outpaced the ability of [infrastructure and
operations] teams' capabilities. Far-more-automated systems are required to restore a
sense of balance, that is, storage solutions that offer much greater scale, but also
much more automation.
&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;
The power of storage solutions has always resided in the software. SUSE
software-defined storage gives you more flexibility and choice than traditional
storage appliances provide. It allows users to meet constantly (even exponentially)
growing storage needs more securely and cost-effectively using industry-standard
hardware and open-source-based software-defined storage solutions. Accordingly, SUSE
has introduced SUSE Enterprise Storage 5 with enhanced ease of management, improved
performance and expanded features, including new disk-to-disk backup capabilities for
enterprise customers, fulfilling the need for "much greater scale, but also much
more automation" as cited by Gartner.
&lt;/p&gt;

&lt;p&gt;
"Every generation of enterprise infrastructure innovation is now being built on open
source", said Gerald Pfeifer, Vice President of Products and Technology Programs at
SUSE. He continued:
&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;
SUSE is expert at both contributing to and using upstream innovation to create
enterprise-grade, secure solutions that can be combined with other technologies to
best address customer needs. This approach applied to software-defined storage
delivers highly scalable solutions that radically reduce storage costs in terms of
both capital and operations expense.
&lt;/p&gt;&lt;/blockquote&gt;

&lt;span class="h3-replacement"&gt;
SUSE Enterprise Storage 5&lt;/span&gt;

&lt;p&gt;
The latest release of SUSE's intelligent
software-defined storage management solution, SUSE Enterprise Storage 5, will enable
IT organizations to accelerate innovation and reduce costs by efficiently
transforming their enterprise storage infrastructures. It is based on the Luminous
release of the Ceph open-source project, and it is ideally suited for compliance,
archive, backup and large data storage. Large data applications include video
surveillance, CCTV, online presence and training, streaming media, X-rays, seismic
processing, genomic mapping and computer-assisted design. Backup and archive
applications include Veritas NetBackup, Commvault and Micro Focus Data Protector,
along with compliance solutions such as iTernity.
&lt;/p&gt;

&lt;p&gt;
SUSE Enterprise Storage 5 is the first commercial offering to support the new
BlueStore back end within Ceph. This follows SUSE's first-to-market support for iSCSI
and CephFS in previous versions of SUSE Enterprise Storage. Notable benefits of this
release include:
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/suse-software-defined-storage-leverages-open-source-break-proprietary-lock-and-reduce-cost" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Thu, 12 Oct 2017 11:57:20 +0000</pubDate>
    <dc:creator>John Grogan</dc:creator>
    <guid isPermaLink="false">1339521 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>iStorage diskAshur Storage Drives</title>
  <link>https://www.linuxjournal.com/content/istorage-diskashur-storage-drives</link>
  <description>  &lt;div data-history-node-id="1339511" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
With software-free setup and operation, the new &lt;a href="https://istorage-uk.com"&gt;iStorage&lt;/a&gt; diskAshur group of
ultra-secure storage drives works across all operating systems, including Linux,
macOS, Android, Chrome, thin and zero clients, MS Windows and embedded systems.
&lt;/p&gt;

&lt;p&gt;
Available in HDD and SDD versions, these high-speed USB 3.1, PIN-authenticated,
hardware-encrypted portable data storage drives feature iStorage's unique EDGE
technology. iStorage calls the EDGE technology—short for Enhanced Dual
Generating Encryption—super-spy-like, due to the advanced security
features that make diskAshur the "most secure data storage drives available on
the market". For one thing, without the PIN, there's no way in!
&lt;/p&gt;

&lt;p&gt;
diskAshur's dedicated, hardware-based secure microprocessor (Common Criteria
EAL4+-ready) employs built-in physical protection mechanisms designed to defend
against external tamper, bypass laser attacks and fault injections. The drives
feature technology that encrypts both the data and the encryption key, ensuring
that private information is secure and protected. Other security features include
a brute-force hack defense mechanism, self-destruct feature, unattended auto-lock
and a wear-resistant epoxy-coated keypad. 
&lt;/p&gt;

&lt;p&gt;
The diskAshur drives are elegantly
designed and available in four striking colors and in capacity options from 128GB
to 5TB.
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12237f1.jpg" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/istorage-diskashur-storage-drives" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 06 Oct 2017 16:50:38 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339511 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>JMR SiloStor NVMe SSD Drives</title>
  <link>https://www.linuxjournal.com/content/jmr-silostor-nvme-ssd-drives</link>
  <description>  &lt;div data-history-node-id="1339460" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
Compute-intensive workflows are the environments in which the newly developed &lt;a href="http://jmr.com"&gt;JMR&lt;/a&gt;
SiloStor NVMe family of SSD drives is designed to show its colors. Ideal for HPC,
data centers, genome research, content creation, CGI/animation, codec processing
and gaming, among others, the SiloStor drive family comes in three NVMe/PCIe
configurations: single-drive module, x4 PCIe connectivity in 512GB/1TB/2TB
capacities; dual-drive, x8 connectivity in 1TB/2TB/4TB capacities; and quad-drive
module, x8 connectivity, available in 2TB/4TB/8TB capacities. The dual- and
quad-drive cards incorporate a PCIe switch, and the drives can be striped (on a
single card) for additional performance. 
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12217f1.jpg" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;p&gt;
All SiloStor designs incorporate active
heatsink coolers on the drive modules themselves, maintaining low operating
temperatures even during intensive sequential write operations. Key performance
metrics include an average access time of &lt;1ms, 2 million hours MTBF, 1,200
TBW minimum endurance, 90,000/70,000 IOPS random 4K read/write speed and
4,000/3,000MB/s sequential read/write speed.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/jmr-silostor-nvme-ssd-drives" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 09 Aug 2017 15:50:57 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339460 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Mastering ATA over Ethernet</title>
  <link>https://www.linuxjournal.com/content/mastering-ata-over-ethernet</link>
  <description>  &lt;div data-history-node-id="1339401" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
At one point in time, when you wanted to attach an external block storage device to a
server, you mapped it as a Logical Unit (LU) across a Storage Area Network (SAN). In
the early days, you would do this over the Fibre Channel (FC) protocol. More recently,
iSCSI (SCSI over IP) has usurped FC in most data centers. Although these protocols
are stable, feature-rich and fully functional, they are built on top of
multiple layers and are extremely complex; truly mastering them requires a certain
level of expertise. That is why the Brantley Coile Company wrote the ATA
over Ethernet (AoE) specification. The standard has been published for a little
more than a decade. Brantley Coile himself used this technology as the base framework
Company). Since then, that same framework has been open-sourced under the General
Public License version 2 (GPLv2) and made available to the general public.
&lt;/p&gt;

&lt;p&gt;
What makes AoE attractive is mainly its simplicity. It was written to run in the Data
Link Layer (Layer 2) of the networking OSI model. This means that it's not impacted by
any of the Internet Protocol (IP) overhead (that is, the Network Layer or Layer 3).
Translation: block devices exported via AoE cannot be accessed over IP and are not
routable beyond the local Ethernet segment. Without this
additional overhead, network performance improves when accessing the exported
block device(s). This non-routability also adds to the security of the technology. In
order to access the volumes, you need to be physically plugged in to the Ethernet
switch hosting them.
&lt;/p&gt;

&lt;p&gt;
As for how data is transferred across the line, AoE encapsulates traditional ATA
commands inside Ethernet frames and sends them across an Ethernet network as opposed
to a SATA or 40-pin ribbon cable.
&lt;/p&gt;

&lt;p&gt;
For the following example, you'll need at least two computing nodes
running any Linux distribution. One of these nodes will export the block devices. This
node will be referred to as the Target. The second node will import the block devices
and will be referred to as the Initiator.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
Configuring the Target&lt;/span&gt;

&lt;p&gt;
Most modern Linux distributions provide binary packages for the entire AoE suite
in their repositories, but if you prefer to install from source, you can visit the
&lt;a href="https://github.com/OpenAoE"&gt;OpenAoE project's new home on GitHub&lt;/a&gt;.
&lt;/p&gt;

&lt;p&gt;
To configure the Target, you must install the AoE server dæmon: vblade. On a
distribution like Debian or Ubuntu, run the following command:

&lt;/p&gt;&lt;/div&gt;
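A minimal sketch of both sides, assuming Debian/Ubuntu package names (vblade on the Target, aoetools on the Initiator), an interface named eth0 and a spare block device /dev/sdb; adjust these to your environment:

```shell
# On the Target: install the vblade daemon and export /dev/sdb
# as shelf 0, slot 1 over interface eth0 (runs in the foreground):
sudo apt-get install -y vblade
sudo vblade 0 1 eth0 /dev/sdb

# On the Initiator: install the AoE userspace tools, load the aoe
# kernel module and discover exported targets. The volume appears
# as /dev/etherd/e0.1 (shelf.slot):
sudo apt-get install -y aoetools
sudo modprobe aoe
sudo aoe-discover
sudo aoe-stat
```

From there, /dev/etherd/e0.1 can be partitioned, formatted and mounted like any local block device.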
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/mastering-ata-over-ethernet" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 24 May 2017 12:07:28 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1339401 at https://www.linuxjournal.com</guid>
    </item>

  </channel>
</rss>
